4043 error from SELECTION-LIST - progress-4gl

I have a SELECTION-LIST defined as:
DEFINE VARIABLE sel_TPlate AS CHARACTER
VIEW-AS SELECTION-LIST MULTIPLE
SCROLLBAR-HORIZONTAL SCROLLBAR-VERTICAL
SIZE 36 BY 17.86
FONT 60 NO-UNDO.
The list contains hundreds of items with each item ranging from 10 - 40 characters. When a user selects multiple items, they are stored in a character variable.
DEFINE VARIABLE listItems AS CHARACTER NO-UNDO.
listItems = sel_TPlate:SCREEN-VALUE.
I understand that this error is caused by exceeding the 32k limit, but I am unsure of the best way to solve this problem. I have attempted to change the variable and list to a LONGCHAR, but this does not solve the issue. Any feedback is appreciated. Thanks!

Selection lists are appropriate for relatively small collections of data. Not for picking hundreds of items.
Instead of using a SELECTION-LIST you should be using a BROWSE associated with a temp-table where each selection is a row in the TT.
A temp-table & browse combination is only limited by available memory and will overflow to disk if necessary.

Group data based on different size

I am trying to make a report in which I need to show data based on Width groups. Below is an example of the data and the required output. I'm unable to create a grouping that gives this required output. If someone can help, please do.
The easiest way to accomplish your grouping needs for this data set would be to create a new Formula Field that evaluates the values of the Width data field for each record to determine which group the record belongs within, then do the grouping on this new formula field.
Your formula field will look like this.
Select {WIDTH}
Case 400 to 600 :
"G1"
Case 601 to 849:
"G2"
Case 850 to 1049:
"G3"
Default :
"Default text or error message text goes here"
You will likely need to adjust the integer values I've used in the Case statements to evaluate the WIDTH field. The text that goes into the Default case is up to you. In fact, if it works logically with your needs, you could eliminate the Default case entirely, as it is not required. However, it is good practice to ensure the Select statement always returns a value, even if that value is text indicating that something unexpected occurred. This lets your users easily recognize a bit of data that may be out of range for the report's grouping, so the report can be modified or the data corrected, whichever is the most appropriate action.
The other three columns in your required output appear to simply count the number of records within each group whose diameter falls within a range. To get this output you can use Running Total Fields with a "Type of summary" of Count, and apply the range of values in the Evaluate section. Set the Reset section to "On change of group", evaluating the group created by the formula field above. You will also want to put a sort order on the diameter field.

What's the impact of TOAST on performance? (adding a hundred varchar columns)

Consider a table with the following data:
id bigint Auto Increment
name character varying(255) NULL
category character varying(255) NULL
english character varying(255) NULL
french character varying(255) NULL
pivot character varying(255) NULL
credits character varying(255) NULL
hash character varying(20) NULL
The english column contains data of the following size (in bytes): max 116, min 5, average 42, median: 40.
The number of rows in the table is around 30,000 and will hardly change.
The new 107 columns will be translations of the English.
Will adding 107 columns hurt performance?
The Postgres site says the maximum number of columns on a Postgres table is
250-1600 depending on column types
and
The maximum number of columns for a table is further reduced as the tuple being stored must fit in a single 8192-byte heap page
Will the data fall under that limit?
Size of the largest row
What is the actual storage size of the table's rows? pg_column_size is the
Number of bytes used to store a particular value (possibly compressed)
SELECT id, pg_column_size(t.*) FROM my_table AS t ORDER BY pg_column_size(t.*) DESC;
-- Some stats derived from the query:
-- Min 87 bytes
-- Max 514 bytes
-- Average 216 bytes
-- Median: 209 bytes
But no compression is actually happening here, because:
When a row that is to be stored is "too wide" (the threshold for that is 2KB by default), the TOAST mechanism first attempts to compress any wide field values. If that isn't enough to get the row under 2KB, it breaks up the wide field values into chunks that get stored in the associated TOAST table. Each original field value is replaced by a small pointer that shows where to find this "out of line" data in the TOAST table. TOAST will attempt to squeeze the user-table row down to 2KB in this way, but as long as it can get below 8KB, that's good enough and the row can be stored successfully.
Compression would start to kick in once the table gets bigger and those new columns are added.
It's unclear to me what the compression ratio would be for such data?
I wonder how effective it'll be on lots of short multilingual sentences. Also, I tried to find the exact name of the compression algorithm used by Postgres: the docs say "the LZ family of compression techniques", but which one – LZ77? LZ78? A twist on one of them?
The best way to find out how much compression will achieve here is certainly to try… once I've got the translations. But I'd rather get an idea of it beforehand, as I won't get all the data at once.
TOAST'ed?
If the size of a row goes beyond the page size limit, then Postgres will rely on TOAST not just to compress the data but also to split the wide values into "out-of-line" chunks.
I understand this will increase fetch times for those rows that don't fit… But what's the impact of TOAST on performance? Is it negligible for such a use case?
Bottom-line
At the end of the day…
Is adding those 107 columns a good idea, or should I use a different approach?
If fine, how important is it to be fetching only those columns the user needs? (No user will need all of them.)
Or am I approaching this the wrong way, i.e. is it a case of premature optimization where I'd have been better off just adding the columns and only investigate later if faced with problems?
Using Postgres 9.6. Upgrading is an option if needed.
The best way to find out how much compression will achieve here is certainly to try… once I've got the translations. But I'd rather get an idea of it beforehand, as I won't get all the data at once.
I'd just copy the English version into each of the 107 columns. That should be good enough to get some useful findings. You might worry that the repetition would cause the compression to be idiosyncratic; but each value is compressed in isolation so won't "know" it is identical to some other value.
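A rough way to stage that test (the table and column names here are placeholders, and only two of the 107 columns are spelled out):
-- Add the new columns and copy the existing English text into each one
-- ("my_table" and "translation_NN" are placeholders; repeat up to 107).
ALTER TABLE my_table
    ADD COLUMN translation_01 varchar(255),
    ADD COLUMN translation_02 varchar(255);

UPDATE my_table
   SET translation_01 = english,
       translation_02 = english;

-- Re-check the on-disk row sizes once the columns are filled:
SELECT avg(pg_column_size(t.*)) AS avg_row_bytes,
       max(pg_column_size(t.*)) AS max_row_bytes
FROM my_table AS t;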
It's unclear to me what the compression ratio would be for such data?
Not very much. For example, the paragraph of yours I quoted first doesn't get any benefit from compression (when I copied it into 107 other columns). Short segments of ordinary text do not have enough repetition in them to be very compressible. Translating them to other languages is unlikely to change this.
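If you also want to see whether anything was actually compressed or moved out of line after such a test, comparing the heap with the table's total size (again with a placeholder table name) is a quick check; the difference is mostly the associated TOAST table:
-- pg_table_size includes TOAST data; pg_relation_size is the heap alone.
SELECT pg_size_pretty(pg_relation_size('my_table')) AS heap_only,
       pg_size_pretty(pg_table_size('my_table'))    AS heap_plus_toast;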
If fine, how important is it to be fetching only those columns the user needs? (No user will need all of them.)
This question has a very clear answer. You should absolutely select only what you need. Assembling a row from 100+ toasted columns, just to throw most of them away, will slow you down.
I don't know if this falls under "premature optimization" so much as under poor design. In one way or another you will need some method of knowing which of the 108 versions you need. But what happens when you need to add a 108th translation, or delete, say, the 93rd? So use this information to form a key to a translation table, something like Translation_Test (for_ref_in bigint, language text, translation text). Then access the necessary text (including perhaps the English version) from that table.
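A sketch of that design, keeping the answer's illustrative names:
-- Lookup table keyed by source row and language (names follow the
-- answer's Translation_Test example and are illustrative).
CREATE TABLE translation_test (
    for_ref_in  bigint NOT NULL,   -- id of the row being translated
    language    text   NOT NULL,   -- e.g. 'en', 'fr'
    translation text   NOT NULL,
    PRIMARY KEY (for_ref_in, language)
);

-- Fetch only the single version a user actually needs:
SELECT translation
FROM translation_test
WHERE for_ref_in = 42
  AND language = 'fr';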

CoDeSys Visualization Dropdown Menu Custom Values in modelTextList

Using CoDeSys, I have a drop down list for a visualization that uses an enumeration of values for the options in the list. The enumeration comes from a separate library and for my particular application I would like to use only a subset of the enumerated values in the drop-down. So in order to accomplish this, I have a text list containing only two values, 5 and 7.
This seems easy enough, but when I run this particular drop-down I see the two values correctly, along with numbers up to 12 for the missing IDs. 12 is odd, since the enumeration has 22 enumerated values.
Is it possible to have only the two values show in the drop-down without making the ID's 0 and 1? I would really like to use the library enumeration.
It turns out there is a checkbox called "Filter Missing Textentries" that must be checked so that the drop-down list only contains the values given in the text list. Once that box is checked, the stray numbered values are removed.

Talend tFileInputDelimited row count

I want the row count mentioned in the image to be used in my expression. How can I access it?
As mentioned in the documentation, several variables are available. Whether a variable is already filled depends on where you want to use it. This is from the aforementioned page:
NB_LINE: the number of rows processed. This is an After variable and it returns an integer.
So in your case this would be
((Integer) globalMap.get("tFileInputDelimited_2_NB_LINE"))
Talend also offers those variables in component input fields if you press Ctrl + Space.
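For example, a hypothetical tJava component connected with an OnSubjobOk trigger (so the After variable is already populated) could read it like this:
// Hypothetical tJava snippet; tFileInputDelimited_2 is assumed to be the
// component name, and this runs after its subjob has finished (OnSubjobOk).
Integer rowCount = (Integer) globalMap.get("tFileInputDelimited_2_NB_LINE");
System.out.println("Rows read: " + rowCount);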

How do I increase drop down option character limit in Microsoft Word 2013?

I have a form in Word 2013 with text input fields and a Drop-down list content control. Each option in the drop-down list cannot exceed a certain number of characters. Is there a way to set the character limit? I have very long values for the options and they get truncated. Just to add, I simply used the content control properties to add the options.
I'm not using any coding or anything like that yet.
Please advise on how I can increase the drop-down option character limit or set it to unlimited characters.
Thanks in advance.