I am trying to read data using the Smartsheet API from Java, so that I can produce different output formats, such as reports and labels, from the data in one row.
I've set up my IDE (NetBeans) so that the API samples work for me, but they are all about creating new sheets, etc., and I cannot figure out how to read the contents of an existing sheet.
I would have thought that I could read the entire sheet into a Java object in one line of code, but it appears to be more complicated than that, and I cannot find any applicable documentation anywhere. The Javadoc does not say where/how to get the relevant IDs, what any of the inclusion or exclusion objects actually do, or which are required or optional, etc.
Are there any examples of reading the contents of a sheet from Java available?
I know that this is a bit of a broad question, but I'm totally stumped.
Thanks Kim!
For others, here's what worked for me. This code gets a list of the sheets in my account and displays the contents of those whose names start with "Specs - ":
import com.smartsheet.api.Smartsheet;
import com.smartsheet.api.SmartsheetBuilder;
import com.smartsheet.api.SmartsheetException;
import com.smartsheet.api.models.Cell;
import com.smartsheet.api.models.Column;
import com.smartsheet.api.models.PagedResult;
import com.smartsheet.api.models.Row;
import com.smartsheet.api.models.Sheet;
import com.smartsheet.api.oauth.Token;
import java.util.List;
import java.util.Objects;
import java.util.logging.Level;
import java.util.logging.Logger;
public class SampleCode {

    /*
     * Show the contents of all sheets whose name starts with "Specs - "
     */
    public static void main(String[] args) {
        final String delimiter = ", ";

        // Create a Smartsheet object with our Access Token
        Token token = new Token();
        token.setAccessToken(Private.TOKEN); // Private is the author's own class holding the access token
        Smartsheet smartsheet = new SmartsheetBuilder().setAccessToken(token.getAccessToken()).build();

        // Get a paged list of all sheets, using null Source Inclusion & Pagination parameters
        PagedResult<Sheet> homeSheets = new PagedResult<>();
        try {
            homeSheets = smartsheet.sheetResources().listSheets(null, null);
        } catch (SmartsheetException ex) {
            Logger.getLogger(SampleCode.class.getName()).log(Level.SEVERE, null, ex);
        }

        // Get a Java List<Sheet> from the PagedResult<Sheet>
        List<Sheet> sheetInfoList = homeSheets.getData();

        // Loop through each sheet in the list
        for (Sheet sheetInfo : sheetInfoList) {
            String sheetName = sheetInfo.getName();

            // Show data for all sheets with names that match our pattern
            if (sheetName.startsWith("Specs - ")) {

                // Get the sheet object, with no optional includes or excludes
                Sheet theSheet = null;
                try {
                    theSheet = smartsheet.sheetResources().getSheet(sheetInfo.getId(), null, null, null, null, null, null, null);
                } catch (SmartsheetException ex) {
                    Logger.getLogger(SampleCode.class.getName()).log(Level.SEVERE, null, ex);
                }

                // Print the sheet's name
                System.out.println("\nSheet: " + theSheet.getName() + "\n");

                // Print the column titles as a delimited line of text
                List<Column> columnList = theSheet.getColumns();
                String columnHeader = null;
                for (Column col : columnList) {
                    columnHeader = columnHeader == null ? col.getTitle() : columnHeader + delimiter + col.getTitle();
                }
                System.out.println(columnHeader);

                // Print each row as a delimited line of text
                List<Row> rowList = theSheet.getRows();
                for (Row row : rowList) {
                    List<Cell> cellList = row.getCells();
                    String rowOutput = null;
                    for (Cell cell : cellList) {
                        String cellOutput = Objects.toString(cell.getValue() != null ? cell.getValue() : cell.getDisplayValue());
                        rowOutput = rowOutput == null ? cellOutput : rowOutput + delimiter + cellOutput;
                    }
                    System.out.println(rowOutput);
                }
            }
        }
    }
}
The Smartsheet API documentation contains sample code that shows how to use the Java SDK. The Java Sample Code section in the docs describes how to establish the connection, etc. Then each operation in the "API Reference" section shows sample code (in the "Java" tab of the panel on the right side of the page) for executing that operation using the Java SDK.
To get Sheet data you'll use the "Get Sheet" operation. As described in the API docs, here's the Java SDK sample code for that operation:
// Get sheet (omit all parameters).
smartsheet.sheetResources().getSheet(sheetId, null, null, null, null, null, null, null);
The "sheetId" parameter should be the ID of the Sheet which you want to retrieve. You can either get this ID programmatically (i.e., by using an operation like "List Sheets", for example) -- or you can get it manually via the Smartsheet UI as described in this help article. The other parameters (all set to "null" in the sample code) represent the 7 parameters described in the API documentation for this operation. I imagine intellisense in your IDE should indicate the sequence of those parameters expected by the getSheet function, as well as valid values for each parameter (but the API docs will explain the meaning of each parameter).
Below is code written by my colleague, who no longer works at the firm. I am inserting records into an object with Data Loader and I can see the success message, but I do not see any records in my object. I am not able to understand what the trigger below is doing. Please help me understand it, as I am new to Salesforce.
trigger DataLoggingTrigger on QMBDataLogging__c (after insert) {

    Map<String, Schema.RecordTypeInfo> recordTypeInfo = Schema.SObjectType.QMB_Initial_Letter__c.getRecordTypeInfosByName();
    List<QMBDataLogging__c> logList = (List<QMBDataLogging__c>) Trigger.new;
    List<SObject> sobjList = (List<SObject>) Type.forName('List<' + 'QMB_Initial_Letter__c' + '>').newInstance();
    Map<String, QMBLetteTypeToVfPage__c> QMBLetteTypeToVfPage = QMBLetteTypeToVfPage__c.getAll();
    Map<String, QMBLetteTypeToVfPage__c> mapofLetterTypeRec = new Map<String, QMBLetteTypeToVfPage__c>();
    Set<Id> processdIds = new Set<Id>();

    for (String key : QMBLetteTypeToVfPage.keySet()) {
        if (!mapofLetterTypeRec.containsKey(key)) mapofLetterTypeRec.put(QMBLetteTypeToVfPage.get(key).Letter_Type__c, QMBLetteTypeToVfPage.get(key));
    }

    for (QMBDataLogging__c log : logList) {
        SObject logRecord = (SObject) log;
        SObject QMBLetterRecord = new QMB_Initial_Letter__c();
        if (mapofLetterTypeRec.containsKey(log.Field1__c)) {
            String recordTypeId = recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).isAvailable()
                    ? recordTypeInfo.get(mapofLetterTypeRec.get(log.Field1__c).RecordType__c).getRecordTypeId()
                    : recordTypeInfo.get('Master').getRecordTypeId();
            String fieldApiNames = mapofLetterTypeRec.containsKey(log.Field1__c) ? mapofLetterTypeRec.get(log.Field1__c).FieldAPINames__c : '';
            //QMBLetterRecord.put('Letter_Type__c',log.Name);
            QMBLetterRecord.put('RecordTypeId', recordTypeId);
            processdIds.add(log.Id);
            if (String.isNotBlank(fieldApiNames) && fieldApiNames.contains(',')) {
                Integer i = 1;
                for (String fieldApiName : fieldApiNames.split(',')) {
                    String logFieldApiName = 'Field' + i + '__c';
                    fieldApiName = fieldApiName.trim();
                    System.debug('fieldApiName==' + fieldApiName);
                    Schema.DisplayType fielddataType = getFieldType('QMB_Initial_Letter__c', fieldApiName);
                    if (fielddataType == Schema.DisplayType.Date) {
                        Date dateValue = Date.parse(String.valueOf(logRecord.get(logFieldApiName)));
                        QMBLetterRecord.put(fieldApiName, dateValue);
                    } else if (fielddataType == Schema.DisplayType.DOUBLE) {
                        String value = (String) logRecord.get(logFieldApiName);
                        Double dec = Double.valueOf(value.replace(',', ''));
                        QMBLetterRecord.put(fieldApiName, dec);
                    } else if (fielddataType == Schema.DisplayType.CURRENCY) {
                        Decimal decimalValue = Decimal.valueOf((String) logRecord.get(logFieldApiName));
                        QMBLetterRecord.put(fieldApiName, decimalValue);
                    } else if (fielddataType == Schema.DisplayType.INTEGER) {
                        String value = (String) logRecord.get(logFieldApiName);
                        Integer integerValue = Integer.valueOf(value.replace(',', ''));
                        QMBLetterRecord.put(fieldApiName, integerValue);
                    } else if (fielddataType == Schema.DisplayType.DATETIME) {
                        DateTime dateTimeValue = DateTime.valueOf(String.valueOf(logRecord.get(logFieldApiName)));
                        QMBLetterRecord.put(fieldApiName, dateTimeValue);
                    } else {
                        QMBLetterRecord.put(fieldApiName, logRecord.get(logFieldApiName));
                    }
                    i++;
                }
            }
        }
        sobjList.add(QMBLetterRecord);
    }

    if (!sobjList.isEmpty()) {
        insert sobjList;
        if (!processdIds.isEmpty()) DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds);
    }

    public static Schema.DisplayType getFieldType(String objectName, String fieldName) {
        SObjectType r = ((SObject) (Type.forName('Schema.' + objectName).newInstance())).getSObjectType();
        DescribeSObjectResult d = r.getDescribe();
        return d.fields.getMap().get(fieldName).getDescribe().getType();
    }
}
You might be looking in the wrong place. Check whether there's a unit test written for this thing (there should be one, especially if it's deployed to production); it should help you understand how it's supposed to be used.
You're inserting records of QMBDataLogging__c, but then it seems they're immediately deleted in DeleteDoAsLoggingRecords.deleteTheProcessRecords(processdIds), whether or not whatever this thing was supposed to do succeeded.
This seems to be some poor man's CSV parser or generic "upload anything" that takes data stored in QMBDataLogging__c and creates QMB_Initial_Letter__c records out of it.
QMBLetteTypeToVfPage__c.getAll() suggests you could go to Setup -> Custom Settings, find this thing, and examine it. Maybe it has some values in production but is empty in your sandbox, and that's why essentially nothing works? Or maybe the values that are there are outdated?
There's a check whether what you upload into Field1__c can be matched to what's in that custom setting. I guess you load some kind of subtype of your QMB_Initial_Letter__c in there. The record type name and the list of fields to read from your log record are also fetched from the custom setting based on that match.
Then this thing takes what you pasted, looks at the list of fields from the custom setting, and parses your values accordingly.
Let's say the custom setting contains something like
Name = XYZ, FieldAPINames__c = 'Name,SomePicklist__c,SomeDate__c,IsActive__c'
This thing will look at the first record you inserted; let's say your CSV looks like this:
Field1__c,Field2__c,Field3__c,Field4__c
XYZ,Closed,2022-09-15,true
This thing will try to parse and map it so that eventually you create a record that "normal" Apex code would express as:
new QMB_Initial_Letter__c(
Name = 'XYZ',
SomePicklist__c = 'Closed',
SomeDate__c = Date.parse('2022-09-15'),
IsActive__c = true
);
It's pretty fragile, as you probably already know. And because parsing CSV is an art, I expect it to absolutely crash and burn when text with commas in it shows up (some text,"text, with commas in it, should be quoted",more text).
In theory an admin can change the mapping in Setup, but then they'd need to add the new field to the loaded file anyway. Overcomplicated. I guess somebody did it to solve an issue with record type IDs, but there are better ways to achieve that and still have a normal CSV file with normal columns and strong type matching, not just chucking everything in as strings.
In theory this lets you have "jagged" CSV files (row 1 having 5 fields, row 2 having a different record type and 17 fields? No problem).
Your call whether it's salvageable or you'd rather ditch it and try normal loading of QMB_Initial_Letter__c records (get back to your business people and ask for requirements?). If you do have a variable number of columns at the source, you'd need to standardise it or group the data so that only one "type" of record (well, whatever's in that Field1__c) goes into each file.
I want to merge multiple PPTs. I use POI and have implemented most of the functionality, but there are still some problems: some elements are not generated. I tested several groups of PPTs.
Case 1: If there is only one slide in the PPT, the result is correct. If there are multiple slides, an exception is thrown.
Below is the exception stack:
java.lang.ClassCastException: org.apache.poi.ooxml.POIXMLDocumentPart cannot be cast to org.apache.poi.xslf.usermodel.XSLFPictureData
at org.apache.poi.xslf.usermodel.XSLFSheet.importBlip(XSLFSheet.java:649)
at org.apache.poi.xslf.usermodel.XSLFPictureShape.copy(XSLFPictureShape.java:378)
at org.apache.poi.xslf.usermodel.XSLFSheet.wipeAndReinitialize(XSLFSheet.java:454)
at org.apache.poi.xslf.usermodel.XSLFSheet.importContent(XSLFSheet.java:433)
at org.apache.poi.xslf.usermodel.XSLFSlide.importContent(XSLFSlide.java:294)
at com.office.MergingMultiplePresentations.main(MergingMultiplePresentations.java:38)
Case 2: I tested another PPT, and when I opened it, it prompted "there is a problem with the content, you can try to repair it". When I clicked repair, some slides of the PPT were deleted. Is there something that hasn't been copied?
Here is my code:
XMLSlideShow ppt = new XMLSlideShow();
// taking the two presentations that are to be merged
String path = "E:\\prj\\test\\";
String file1 = "1.pptx";
String file2 = "2.pptx";
String[] inputs = {file1, file2};
for (String arg : inputs) {
    FileInputStream inputstream = new FileInputStream(path + arg);
    XMLSlideShow src = new XMLSlideShow(inputstream);
    for (XSLFSlide srcSlide : src.getSlides()) {
        try {
            XSLFSlideLayout srcLayout = srcSlide.getSlideLayout();
            XSLFSlideMaster srcMaster = srcSlide.getSlideMaster();
            XSLFSlide slide = ppt.createSlide();
            XSLFSlideLayout layout = slide.getSlideLayout();
            XSLFSlideMaster master = slide.getSlideMaster();
            layout.importContent(srcLayout);
            master.importContent(srcMaster);
            slide.importContent(srcSlide);
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
String file3 = "3.pptx";
// creating the output file object
FileOutputStream out = new FileOutputStream(path + file3);
// saving the changes to a file
ppt.write(out);
out.close();
Merging presentations using POI is a bit cumbersome because you have to take care of the layouts and masters yourself. It's easier to use Aspose.Slides for Java for this. The following code example shows you how to merge presentations using that library. Slide layouts and slide masters will be merged automatically.
String file1 = "1.pptx";
String file2 = "2.pptx";
String[] inputs = {file1, file2};
// Prepare a new empty presentation.
Presentation ppt = new Presentation();
ppt.getSlides().removeAt(0); // removes the first empty slide
ppt.getSlideSize().setSize(SlideSizeType.Widescreen, SlideSizeScaleType.Maximize);
// Merge the input presentations.
for (String file : inputs) {
    Presentation source = new Presentation(file);
    for (ISlide slide : source.getSlides()) {
        ppt.getSlides().addClone(slide);
    }
    source.dispose();
}
ppt.save("3.pptx", SaveFormat.Pptx);
ppt.dispose();
This is a paid product, but you can get a temporary license to try it out.
Alternatively, you could use Aspose.Slides Cloud SDK for Java. This product provides a REST-based API that allows you to make 150 free API calls per month for API learning and presentation processing. The following code example shows you how to do the same using Aspose.Slides Cloud:
SlidesApi slidesApi = new SlidesApi("my_client_id", "my_client_secret");
String file1 = "1.pptx";
String file2 = "2.pptx";
String outFile = "3.pptx";
// Prepare a new empty presentation.
slidesApi.createPresentation(outFile, null, null, null, null, null);
slidesApi.deleteSlide(outFile, 1, null, null, null); // removes the first empty slide
SlideProperties slideProperties = new SlideProperties();
slideProperties.setSizeType(SlideProperties.SizeTypeEnum.WIDESCREEN);
slideProperties.setScaleType(SlideProperties.ScaleTypeEnum.MAXIMIZE);
slidesApi.setSlideProperties(outFile, slideProperties, null, null, null);
// Merge the input presentations.
PresentationsMergeRequest mergeRequest = new PresentationsMergeRequest();
mergeRequest.setPresentationPaths(Arrays.asList(file1, file2));
slidesApi.merge(outFile, mergeRequest, null, null, null);
Sometimes it is necessary to merge presentations without any code. For such cases, you can use the free Aspose Online Merger.
I work as a Support Developer at Aspose.
I am using iText7 and I want to extract all the text from a PDF, put the HTML tag for bold (<b>...</b>) around all the words that use bold fonts, and save it in a text file. Any pointers? I am able to independently extract the text and also extract all the bold words, but I am not able to correlate the two.
Here is the code snippet I am using for extracting the text:
PdfDocument MyDocument = new PdfDocument(new PdfReader("C:\\MyTest.pdf"));
string MyText = PdfTextExtractor.GetTextFromPage(MyDocument.GetPage(1), new SimpleTextExtractionStrategy());
Here is the code I am using for extracting all the words using the bold font:
Rectangle MyRectangle = new Rectangle(0, 0, 50, 100);
CustomFontFilter fontFilter = new CustomFontFilter(MyRectangle);
FilteredEventListener listener = new FilteredEventListener();
LocationTextExtractionStrategy extractionStrategy =
listener.AttachEventListener(new LocationTextExtractionStrategy(), fontFilter);
PdfCanvasProcessor parser = new PdfCanvasProcessor(listener);
parser.ProcessPageContent(MyDocument.GetPage(1));
String MyBoldTextList = extractionStrategy.GetResultantText();
//------
class CustomFontFilter : TextRegionEventFilter
{
    public CustomFontFilter(iText.Kernel.Geom.Rectangle filterRect) : base(filterRect) { }

    public override bool Accept(IEventData data, EventType type)
    {
        if (type == EventType.RENDER_TEXT)
        {
            TextRenderInfo renderInfo = (TextRenderInfo)data;
            PdfFont font = renderInfo.GetFont();
            if (font != null)
                return font.GetFontProgram().GetFontNames().GetFontName().Contains("Bold");
        }
        return false;
    }
}
The problem is that the PDF in question here is a multi-column document. SimpleTextExtractionStrategy brings the text in perfect order, but if I use the LocationStrategy, it messes up the text by jumping from one column to the next in each line. I cannot find any way to get the list of bold words using SimpleTextExtractionStrategy. With LocationStrategy, the list that I get is not in the right order, so I am unable to correlate it.
So to summarize:
You want to extract all the text from a pdf and put the html tag for bold (<b>...</b>) around all the text that uses bold fonts.
Your PDFs allow normal text extraction (without those <b> tags) using the SimpleTextExtractionStrategy. The LocationTextExtractionStrategy on the other hand cannot be used as it messes up the order of the multi-column text.
Bold text in your PDFs can properly be recognized by your CustomFontFilter, i.e. by the
font.GetFontProgram().GetFontNames().GetFontName().Contains("Bold")
condition.
Thus, one way to implement your task would be to extend the SimpleTextExtractionStrategy to check every chunk received using the CustomFontFilter condition and insert <b> tags where required.
For example like this:
public class BoldTaggingSimpleTextExtractionStrategy : SimpleTextExtractionStrategy
{
    FieldInfo textField = typeof(TextRenderInfo).GetField("text", BindingFlags.NonPublic | BindingFlags.Instance);
    bool currentlyBold = false;

    public override void EventOccurred(IEventData data, EventType type)
    {
        if (type.Equals(EventType.RENDER_TEXT))
        {
            TextRenderInfo renderInfo = (TextRenderInfo)data;
            string fontName = renderInfo.GetFont()?.GetFontProgram()?.GetFontNames()?.GetFontName();
            if (fontName != null && fontName.Contains("Bold"))
            {
                if (!currentlyBold)
                {
                    textField.SetValue(renderInfo, "<b>" + renderInfo.GetText());
                    currentlyBold = true;
                }
            }
            else if (currentlyBold)
            {
                AppendTextChunk("</b>");
                currentlyBold = false;
            }
        }
        base.EventOccurred(data, type);
    }
}
As you can see, I used reflection here. I did so because (A) TextRenderInfo does not allow public setting of its text, and (B) AppendTextChunk must not be used before the first chunk has been processed by base.EventOccurred: there, the size of a StringBuilder containing the collected text chunks is used to check whether the chunk currently being processed is the first one, and if something is in that builder before at least one chunk has been processed, you get a NullReferenceException. There are other work-arounds for that, but reflection here means only one more line of code.
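Once the strategy is in place, usage is the same as with the stock strategies. iText 7's Java edition mirrors the .NET API shown here almost name-for-name, so the call site could look like the following Java sketch (this assumes the strategy class above has been ported to Java under the same name; "MyTest.pdf" is a placeholder path):

import com.itextpdf.kernel.pdf.PdfDocument;
import com.itextpdf.kernel.pdf.PdfReader;
import com.itextpdf.kernel.pdf.canvas.parser.PdfTextExtractor;

// Hedged sketch: BoldTaggingSimpleTextExtractionStrategy is assumed to be
// a Java port of the C# class above.
PdfDocument myDocument = new PdfDocument(new PdfReader("MyTest.pdf"));
String taggedText = PdfTextExtractor.getTextFromPage(
        myDocument.getPage(1), new BoldTaggingSimpleTextExtractionStrategy());
myDocument.close();
// taggedText now holds the page text with <b>...</b> around bold runs,
// ready to be written to a text file.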
I'm using metadata-extractor to retrieve metadata from video files. I have it successfully retrieving the directories. Now I need to query the directories for specific info -- duration, height, etc.
The metadata-extractor docs give this example of how to query for a specific tag value:
// obtain the Exif directory
ExifSubIFDDirectory directory
= metadata.getFirstDirectoryOfType(ExifSubIFDDirectory.class);
// query the tag's value
Date date
= directory.getDate(ExifSubIFDDirectory.TAG_DATETIME_ORIGINAL);
So it appears I need to get a list of the relevant tags, such as TAG_DATETIME_ORIGINAL, for duration, height, etc.
This page in the metadata-extractor docs contains a link titled "the various tag values", but the page it goes to lists tags for still images only, not for video files.
Googling for Metadata-Extractor -- Complete List of All Tags does not seem to bring up a list of all tags.
Are the metadata-extractor docs really missing a list of tags, or am I approaching this the wrong way somehow?
I found a list of tags at:
https://developer.tizen.org/dev-guide/2.3.1/org.tizen.guides/html/native/multimedia/metadata_extractor_n.htm
However, those constants don't seem to be what's needed in actual code. Here's Java code that works:
import com.drew.imaging.ImageMetadataReader;
import com.drew.metadata.Directory;
import com.drew.metadata.Metadata;
import com.drew.metadata.Tag;
import com.drew.metadata.file.FileTypeDirectory;
import com.drew.metadata.mp4.Mp4Directory;
import com.drew.metadata.mp4.media.Mp4SoundDirectory;
import com.drew.metadata.mp4.media.Mp4VideoDirectory;
import java.io.InputStream;
import java.net.URL;
[.....]
Metadata theMetadata = null;
try {
    InputStream stream = new URL(theVideoInfo.getLinkToVideo()).openStream();
    theMetadata = ImageMetadataReader.readMetadata(stream);
} catch (java.lang.Exception exception) {
    exception.printStackTrace();
}
Mp4SoundDirectory soundDirectory
= theMetadata.getFirstDirectoryOfType(Mp4SoundDirectory.class);
Mp4VideoDirectory videoDirectory
= theMetadata.getFirstDirectoryOfType(Mp4VideoDirectory.class);
Mp4Directory mp4Directory
= theMetadata.getFirstDirectoryOfType(Mp4Directory.class);
FileTypeDirectory fileTypeDirectory
= theMetadata.getFirstDirectoryOfType(FileTypeDirectory.class);
String numberOfAudioChannels
= soundDirectory.getString(Mp4SoundDirectory.TAG_NUMBER_OF_CHANNELS);
String duration = mp4Directory.getString(Mp4Directory.TAG_DURATION);
String frameRate = videoDirectory.getString(Mp4VideoDirectory.TAG_FRAME_RATE);
String height = videoDirectory.getString(Mp4VideoDirectory.TAG_HEIGHT);
String width = videoDirectory.getString(Mp4VideoDirectory.TAG_WIDTH);
String type = fileTypeDirectory.getString(FileTypeDirectory.TAG_DETECTED_FILE_MIME_TYPE);
I found the constants (TAG_HEIGHT, TAG_WIDTH, etc.) by directly examining the metadata-extractor objects in the debugger. For example, I'd type:
Mp4VideoDirectory.WIDTH
...and the debugger (IntelliJ) would auto-complete the available constants that had the text "WIDTH" in them.
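If you'd rather discover the available directories and tags programmatically than through the debugger, metadata-extractor can enumerate everything it actually found in a given file. A minimal sketch (assuming a local file; "video.mp4" is a placeholder path):

import com.drew.imaging.ImageMetadataReader;
import com.drew.metadata.Directory;
import com.drew.metadata.Metadata;
import com.drew.metadata.Tag;
import java.io.File;

public class DumpAllTags {
    public static void main(String[] args) throws Exception {
        Metadata metadata = ImageMetadataReader.readMetadata(new File("video.mp4"));
        // Walk every directory and tag the library extracted from this file.
        for (Directory directory : metadata.getDirectories()) {
            for (Tag tag : directory.getTags()) {
                // Prints e.g. "[MP4 Video] Width - 1920"
                System.out.println("[" + directory.getName() + "] "
                        + tag.getTagName() + " - " + tag.getDescription());
            }
        }
    }
}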
I've read all the documentation I can find and watched all the videos I can find, and I still don't understand how to do this. I have set up an XPages REST service and it works well. Now I want to place the results of the service into either a combobox or a type-ahead text field. Ideally I would like to know how to do it for both types of fields.
I have an application which has a view containing a list of countries, another view containing a list of states, and another containing a list of cities. I would like the first field to only display the countries field from the list of data it returns in the XPages REST Service. Then, depending upon which country was selected, I would like the states for that country to be listed in another field for selection, etc.
I can see code for calling the REST Service results from a button, or from a dojo grid, but I cannot find how to call it to populate either of the types of fields identified above.
Where would I call the Service for the field? I had thought it would go in the Data area, but perhaps I've just not found the right syntax to use.
Update (November 6, 2017):
I have been following your suggestion, but am still as lost as can be. Here's what I currently have in my code:
x$( "#{id:ApplCountry}" ).select2({
placeholder: "select a country",
minimumInputLength: 2,
allowClear : true,
multiple: false,
ajax: {
dataType: 'text/plain',
url: "./Application.xsp/gridData",
quietMillis: 250,
data: function (params) {
return {
search:'[name=]*'+params.term+'*',
page: params.page
};
},
processResults: function (data, page) {
var data = $.map(data, function (obj) {
obj.id = obj.id || obj["#entityid"];
obj.text = obj.text || obj.name;
return obj;
});
},
return {results: data};
}
}
});
I'm using the dataType of 'text/plain' because that is what I understood I should use when gathering data from a Domino application. I have tried changing this to 'json' but it makes no difference.
I'm using processResults because I understand this is what should be used in version 4 of select2.
I don't understand the whole use of the hidden field, so I've stayed away from that.
No matter what I do, although my REST service works if I put it directly in the URL, I cannot get any data to display in the field. All I want to display in the field is the country code of the document, which is in the field named "name" (not my choice; it's how it came when I imported the data from MySQL).
I have read documentation and watched videos, but still don't really understand how everything fits together. That was my problem with the REST service. If you use it in Dojo, you just put the name of the service in a field on the Dojo element and it's done, so I don't understand why all the additional coding is needed for another type of Domino element. Shouldn't it work the same way?
I should point out that at some points it does display the default message, so it does find the field. It just doesn't display the country selections.
I think the issue may be that you are not returning SelectItems to your select2, and that is what it is expecting. When I do something like you are trying, I actually use a bean to generate the selection choices. You may want to try that; I'm putting the working part of my bean below.
Utils.getItemValueAsString is a method I use to return either the string value of a field or, if the field is not on the document / empty / null, an empty string. I took out an if that doesn't relate to this, so there may be a mismatch, but I hope not.
You might be able to jump directly to populating the ArrayList, but as I recall I needed to leverage the LinkedHashMap for something.
You should be able to do the same using SSJS, but since that renders to Java before executing, I find this more efficient.
For label/value pairs:
// Fragment of the bean method; dc (a DocumentCollection), allObjects, Utils,
// LabelField, and ValueField are defined elsewhere in the class.
LinkedHashMap lhmap = new LinkedHashMap();
Document doc = null;
Document tmpDoc = null;
allObjects.addElement(doc);
if (dc.getCount() > 0) {
    doc = dc.getFirstDocument();
    while (doc != null) {
        lhmap.put(Utils.getItemValueAsString(doc, LabelField, true), Utils.getItemValueAsString(doc, ValueField, true));
        tmpDoc = dc.getNextDocument(doc);
        doc.recycle();
        doc = tmpDoc;
    }
}

List<SelectItem> options = new ArrayList<SelectItem>();
Set set = lhmap.entrySet();
Iterator hsItr = set.iterator();
while (hsItr.hasNext()) {
    Map.Entry me = (Map.Entry) hsItr.next();
    SelectItem option = new SelectItem();
    option.setLabel(me.getKey() + "");
    option.setValue(me.getValue() + "");
    options.add(option);
}
System.out.println("About to return from generating");
return options;
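For context, here is a hedged sketch of how such a method might be exposed to the page; the class and method names below (OptionsBean, getCountryOptions) are made up for illustration and are not from the answer above:

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;
import javax.faces.model.SelectItem;

// Hypothetical skeleton: register a class like this as a managed bean in
// faces-config.xml, then point a combobox at it with
// <xp:selectItems value="#{optionsBean.countryOptions}" /> in the XPage.
public class OptionsBean implements Serializable {
    private static final long serialVersionUID = 1L;

    public List<SelectItem> getCountryOptions() {
        List<SelectItem> options = new ArrayList<SelectItem>();
        // Build the list from your countries view here, as in the fragment above.
        SelectItem option = new SelectItem();
        option.setLabel("United States"); // sample entry, for illustration only
        option.setValue("US");
        options.add(option);
        return options;
    }
}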
I ended up using straight up SSJS. Worked like a charm - very simple.