SAP Unicode: offset + length exceeds structure length

I have some account issues on SCN, so I am making an attempt here.
We switched to Unicode and ran into some issues with that. The statement INFTY_TAB = PS+2 now raises an error saying "the offset + length is exceeding".
I found some hints but couldn't really figure out how to fix this. And even when I manage to fix those errors I get a new error, 'Include-Report %HR_P9002 not found'. The infotype is still there, so is there something else I can check?
Definition of PS:
DATA: BEGIN OF PS OCCURS 0.
*This indicates if a record was read with disabled authority check.
data: authc_disabled(1) type c.
DATA: TCLAS LIKE PSPAR-TCLAS.
INCLUDE STRUCTURE PRELP.
DATA: ACRCD LIKE SY-SUBRC.
DATA: END OF PS.
TCLAS is a char(1) field.
This is the part where the error pops up:
INFTY_TAB = PS+2.
Error (translated, so apologies for any mistakes that may appear):
Offset and length (=2432) exceed the length of the character-type beginning (=2430) of the structure.

It depends on the length of INFTY_TAB. You have to set the length explicitly:
INFTY_TAB = PS+2(length).
Official information is here. The important point to note is that the inclusion of ACRCD (declared LIKE SY-SUBRC, an INT4 field) limits the range of fields you can access using this (discouraged) method of access: in a Unicode program, offset/length access to a structure must stay within its purely character-type beginning. Your error message shows that offset plus length comes to 2432, while the character-type beginning of PS is only 2430 characters long, so with an offset of 2 the explicit length must not exceed 2428.
ASSIGN field+off TO is generally forbidden from a syntactical point of view since any offset <> 0 would cause the range to be exceeded.
Although the sentence above refers to the ASSIGN statement, it applies to this situation as well.

Related

Cimplicity Screen - one object/button that is dependent on hundreds of points

So I have created a huge screen that essentially just shows the robot status for every robot in this factory (individually)… At the very end of the project, they decided they want one object on the screen that blinks if any of the 300 robots fault. I am trying to think of a way to make this work. Maybe a global script of some kind? Problem is, I do not do much scripting in Cimplicity, so any help is appreciated.
All the points that are currently used on this screen (to indicate a fault) have very similar names… as in, the beginning is the same… so I was thinking of a script that could recognize whether a bit is high based on PART of its point name. The end changes a little each time, but I am sure there is a way to match only part of the string and ignore the rest. If the end has to be hard-coded, that's fine.
You can use a Python script in Cimplicity.
I will not go into detail on the use of Python in Cimplicity, which is well described in the documentation indicated above.
Here's an example of what can be done. Note that I don't have a way to test it, and of course it will only work if the robots are declared with names following the format Robot_1, Robot_2, Robot_3 ... Robot_10 ... Robot_300. It also depends on the name and type of the fault variable; as you didn't define it, I assume it is an integer, with ZERO indicating no error. But if you use something other than that, you can easily change it.
import cimplicity
(...)
OneRobotWithFault = False
# Here you get the values and check for fault
for i in range(1, 301):  # robots are assumed to be named Robot_1 .. Robot_300
    pointName = f'MyFactory.Robot_{i}.FaultCode'
    robotFaultCode = cimplicity.point_get(pointName)
    if robotFaultCode > 0:
        OneRobotWithFault = True
        break
# Set the status to the variable "WeHaveRobotWithFault"
cimplicity.point_set("WeHaveRobotWithFault", OneRobotWithFault)

How to encode normalized(A,B) properly?

I am using clingo to solve a homework problem and stumbled upon something I can't explain:
normalized(0,0).
normalized(A,1) :-
A != 0.
normalized(10).
In my opinion, normalized should be 0 when the first parameter is 0, and 1 in every other case.
Running clingo on that, however, produces the following:
test.pl:2:1-3:12: error: unsafe variables in:
normalized(A,1):-[#inc_base];A!=0.
test.pl:2:12-13: note: 'A' is unsafe
Why is A unsafe here?
According to Programming with CLINGO
Some error messages say that the program
has “unsafe variables.” Such a message usually indicates that the head of one of
the rules includes a variable that does not occur in its body; stable models of such
programs may be infinite.
But in this example A is present in the body.
Will clingo produce an infinite set consisting of answers for all numbers here?
I tried wrapping the first parameter in number(_) and pattern matching on it to avoid this situation, but got the same result:
normalized(number(0),0).
normalized(A,1) :-
A=number(B),
B != 0.
normalized(number(10)).
How would I write normalized properly?
With "variables occuring in the body" actually means in a positive literal in the body. I can recommend the official guide: https://github.com/potassco/guide/releases/
The second thing: ASP is not Prolog. Your rules get grounded, i.e. each first-order variable is replaced by the constants in its domain. In your case A has no domain.
What would be the expected outcome of your program?
normalized(12351,1).
normalized(my_mom,1).
would all be valid replacements for A, so you would create an infinite program. This is why A has to be bound by a domain. For example:
dom(a). dom(b). dom(c). dom(100).
normalized(0,0).
normalized(A,1) :- dom(A).
would produce
normalized(0,0).
normalized(a,1).
normalized(b,1).
normalized(c,1).
normalized(100,1).
Also note that there is no such thing as number/1. ASP is a type-free language.
Also,
normalized(10).
is a different predicate with only one parameter; I do not know how this is supposed to fit into your program.
Maybe you are looking for something like this:
dom(1..100).
normalize(0,0).
normalize(X,1) :- dom(X).
foo(43).
bar(Y) :- normalize(X,Y), foo(X).

Using "is null" when retrieving prices doesn't work

I am trying to get the price items for performance block storage that are generic (not specific to a certain datacenter). I can see that these have the locationGroupId set to blank or null, but I can't seem to get the objectFilter to work with that: the query returns nothing. If I omit the locationGroupId filter I get a result that contains both location-specific and non-location-specific prices.
GET /rest/v3.1/SoftLayer_Product_Package/759/getItemPrices.json?objectMask=mask[locationGroupId,id,categories,item]&objectFilter={"itemPrices":{"categories":{"categoryCode":{"operation":"performance_storage_space"}},"item":{"keyName":{"operation":"$=GBs"}},"locationGroupId":{"operation":"is null"}}}
I am guessing there is something wrong with the object filter, any ideas?
If I filter on locationGroupId 509 it works:
/rest/v3.1/SoftLayer_Product_Package/759/getItemPrices.json?objectMask=mask[locationGroupId,id,categories,item]&objectFilter={"itemPrices":{"categories":{"categoryCode":{"operation":"performance_storage_space"}},"item":{"keyName":{"operation":"$=GBs"}},"locationGroupId":{"operation":509}}}
The reason the first query didn't work while the second did was that I used "curl -sg" to do the request. While that eliminates the need to escape the {}[] characters, it leaves the rest of the URL unescaped as well, like the space in "is null". Changing that to "is%20null" solves the issue.
I am posting this as the answer as I find it likely that others will encounter this problem.
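As a rough illustration (untested), the same request can be made from C# with HttpClient, letting Uri.EscapeDataString do the percent-encoding (it turns the space in "is null" into %20); the api.softlayer.com host and the username:apiKey credentials are placeholders for your own account details:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class GetGenericBlockStoragePrices
{
    static async Task Main()
    {
        // Same object filter as in the question, as a raw JSON string.
        string objectFilter = "{\"itemPrices\":{"
            + "\"categories\":{\"categoryCode\":{\"operation\":\"performance_storage_space\"}},"
            + "\"item\":{\"keyName\":{\"operation\":\"$=GBs\"}},"
            + "\"locationGroupId\":{\"operation\":\"is null\"}}}";
        string objectMask = "mask[locationGroupId,id,categories,item]";

        // Uri.EscapeDataString percent-encodes the space in "is null" (and the braces/brackets too).
        string url = "https://api.softlayer.com/rest/v3.1/SoftLayer_Product_Package/759/getItemPrices.json"
            + "?objectMask=" + Uri.EscapeDataString(objectMask)
            + "&objectFilter=" + Uri.EscapeDataString(objectFilter);

        using (HttpClient client = new HttpClient())
        {
            // SoftLayer uses basic auth with your username and API key (placeholders here).
            string credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("username:apiKey"));
            client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

            string response = await client.GetStringAsync(url);
            Console.WriteLine(response);
        }
    }
}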

Get statuscode text in C#

I'm using a plugin and want to perform an action based on the record's statuscode value. I've seen online that you can use entity.FormattedValues["statuscode"] to get values from option sets, but when I try it I get an error saying "The given key was not present in the dictionary".
I know this can happen when the plugin can't find the change for the field you're looking for, but I've already checked that the field does exist using entity.Contains("statuscode") and it passes that fine, yet it still hits this error.
Can anyone help me figure out why it's failing?
Thanks
I've not seen the entity.FormattedValues before.
I usually use the entity.Attributes, e.g. entity.Attributes["statuscode"].
MSDN
Edit
CRM wraps many of the values in objects which hold additional information; in this case statuscode uses an OptionSetValue, so to get the value you need:
((OptionSetValue)entity.Attributes["statuscode"]).Value
This will return a number, as this is the underlying value in CRM.
If you open up the customisation options in CRM, you will usually (some system fields are locked down) be able to see the label and value for each option.
If you need the label, you could either hardcode it based on the information in CRM, or you could retrieve it from the metadata services as described here.
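As a rough, untested sketch of that metadata route (the StatusCodeHelper class, the GetStatusCodeLabel name and the example entity name are placeholders; service is the IOrganizationService available to the plugin):
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;
using Microsoft.Xrm.Sdk.Metadata;

public static class StatusCodeHelper
{
    // Example: GetStatusCodeLabel(service, "account", ((OptionSetValue)entity["statuscode"]).Value)
    public static string GetStatusCodeLabel(IOrganizationService service,
        string entityLogicalName, int statusValue)
    {
        // Ask the metadata service for the statuscode attribute of the entity.
        var request = new RetrieveAttributeRequest
        {
            EntityLogicalName = entityLogicalName,
            LogicalName = "statuscode",
            RetrieveAsIfPublished = true
        };

        var response = (RetrieveAttributeResponse)service.Execute(request);
        var metadata = (StatusAttributeMetadata)response.AttributeMetadata;

        // Find the option whose underlying value matches and return its label.
        foreach (OptionMetadata option in metadata.OptionSet.Options)
        {
            if (option.Value == statusValue)
                return option.Label.UserLocalizedLabel.Label;
        }
        return null;
    }
}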
To avoid your error, you need to check the collection you wish to use (rather than the Attributes collection):
if (entity.FormattedValues.Contains("statuscode"))
{
    var myStatusCode = entity.FormattedValues["statuscode"];
}
However, although the SDK fails to confirm this, I suspect that FormattedValues are only ever present for numeric or currency attributes. (Part speculation on my part, though.)
entity.FormattedValues works only for string display values.
For example, say you have an option set with display names 1, 2, 3.
The above statement does not recognize these values because they are integers. If you look at the exact definition of formatted values in the link below
http://msdn.microsoft.com/en-in/library/microsoft.xrm.sdk.formattedvaluecollection.aspx
you will find that this statement is valid only for string display values. If you try to use it with integer values it will throw a "key not present in the dictionary" exception.
So try to avoid this statement for retrieving option sets whose display names are integers.
Try this
string Title = entity.FormattedValues.Contains("title") ? entity.FormattedValues["title"] : "";
When you are talking about an option set, you have a value and a label; what this gives you is the label. The ?: conditional makes sure that a missing value never gets passed on, an empty string is used instead.

TermQuery not returning on a known search term, but WildcardQuery does

Am hoping someone with enough insight into the inner workings of Lucene might be able to point me in the right direction =)
I'll skip most of the surrounding irrelevant code, and cut right to the chase. I have a Lucene index, to which I am adding the following field (variables replaced by their literal values):
document.Add( new Field("Typenummer", "E5CEB501A244410EB1FFC4761F79E7B7",
Field.Store.YES , Field.Index.UN_TOKENIZED));
Later, when I search my index (using other types of queries), I am able to verify that this field does indeed appear in my index - like when looping through all Fields returned by Document.GetFields()
Field: Typenummer, Value: E5CEB501A244410EB1FFC4761F79E7B7
So far so good :-)
Now the real problem is: why can I not use a TermQuery to search against this value and actually get a result?
This code produces 0 hits:
// Returns 0 hits
bq.Add( new TermQuery( new Term( "Typenummer",
"E5CEB501A244410EB1FFC4761F79E7B7" ) ), BooleanClause.Occur.MUST );
But if I switch this to a WildcardQuery (with no wildcards), I get the 1 hit I expect.
// returns the 1 hit I expect
bq.Add( new WildcardQuery( new Term( "Typenummer",
"E5CEB501A244410EB1FFC4761F79E7B7" ) ), BooleanClause.Occur.MUST );
I've checked field lengths, I've checked that I am using the same Analyzer, and so on, and I am still at square one as to why this is.
Can anyone point me in a direction I should be looking?
I finally figured out what was going on. I'm expanding the tags for this question as it, much to my surprise, actually turned out to be an issue with the CMS this particular problem exists in. In summary, the problem came down to this:
The field is stored UN_TOKENIZED, meaning Lucene will store it exactly "as-is"
The BooleanQuery I pasted snippets from gets sent to the Sitecore SearchManager inside a PreparedQuery wrapper
The behaviour I expected from this was, that my query (having already been prepared) would go - unaltered - to the Lucene API
Turns out I was wrong. It passes through a RewriteQuery method that copies my entire set of nested queries as-is, with one exception - all the Term arguments are passed through a LowercaseStrategy()
As I indexed an UPPERCASE Term (UN_TOKENIZED), and Sitecore changes my PreparedQuery to lowercase - 0 results are returned
Am not going to start an argument of whether this is "by design" or "by design flaw" implementation of the Lucene Wrapper API - I'll just note that rewriting my query when using the PreparedQuery overload is... to me... unexpected ;-)
A further takeaway from this: storing the field as TOKENIZED would eliminate this problem too, as the StandardAnalyzer by default lowercases all tokens.
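For illustration, a rough (untested) Lucene.Net sketch of that TOKENIZED variant; the exact IndexWriter/IndexSearcher overloads and the Hits API vary between Lucene.Net versions, and RAMDirectory is used only to keep the example self-contained:
using Lucene.Net.Analysis.Standard;
using Lucene.Net.Documents;
using Lucene.Net.Index;
using Lucene.Net.Search;
using Lucene.Net.Store;

StandardAnalyzer analyzer = new StandardAnalyzer();
RAMDirectory directory = new RAMDirectory();

IndexWriter writer = new IndexWriter(directory, analyzer, true);
Document document = new Document();
// TOKENIZED: the StandardAnalyzer keeps the hex string as a single token but lowercases it.
document.Add(new Field("Typenummer", "E5CEB501A244410EB1FFC4761F79E7B7",
    Field.Store.YES, Field.Index.TOKENIZED));
writer.AddDocument(document);
writer.Close();

IndexSearcher searcher = new IndexSearcher(directory);
// The term must now be lowercase to match what the analyzer actually indexed,
// which is also what Sitecore's LowercaseStrategy() produces at query time.
TermQuery query = new TermQuery(new Term("Typenummer",
    "E5CEB501A244410EB1FFC4761F79E7B7".ToLower()));
Hits hits = searcher.Search(query);   // should now return the 1 expected hit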