The Juniper knowledge base says that you can poll jnxOperatingCPU.x.x.x.x to get the CPU utilization from the device, where x.x.x.x is "the last 4 octets"; in my case that is "9.1.0.0".
I don't seem to be able to get results like this using pysnmp's getCmd() method. I have the JUNIPER-MIB in place, but the script returns:
No symbol JUNIPER-MIB::jnxOperatingCPU.9.1.0.0 at < pysnmp.smi.builder.MibBuilder object at 0x198b810>
I have another SNMP monitoring tool in place that can reach this OID, so I know it's valid on this device. I can also use the full numeric OID to get the value, but I'd rather have the pretty name.
Might anyone have an example of using such an OID with pysnmp.hlapi?
From the error message it looks like you are using the ObjectIdentity class incorrectly (pasting your code snippet would be helpful, though).
According to the JUNIPER-MIB the jnxOperatingCPU object belongs to the jnxOperatingTable table which has these indices:
jnxOperatingEntry OBJECT-TYPE
    SYNTAX      JnxOperatingEntry
    MAX-ACCESS  not-accessible
    STATUS      current
    DESCRIPTION
        "An entry of operating status table."
    INDEX   { jnxOperatingContentsIndex,
              jnxOperatingL1Index,
              jnxOperatingL2Index,
              jnxOperatingL3Index }
    ::= { jnxOperatingTable 1 }
All four indices are of type Integer32.
Therefore try this:
ObjectIdentity('JUNIPER-MIB', 'jnxOperatingCPU', 9, 1, 0, 0)
Here is the documentation on the ObjectIdentity class.
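For reference, here is a minimal pysnmp.hlapi sketch of such a GET, assuming SNMPv2c; the community string and device address are placeholders, and the compiled JUNIPER-MIB must already be available to pysnmp:

from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public'),                 # placeholder community string
           UdpTransportTarget(('192.0.2.1', 161)),  # placeholder device address
           ContextData(),
           # the table column plus its four integer instance indices
           ObjectType(ObjectIdentity('JUNIPER-MIB', 'jnxOperatingCPU', 9, 1, 0, 0))))

if errorIndication:
    print(errorIndication)
else:
    for varBind in varBinds:
        print(' = '.join(x.prettyPrint() for x in varBind))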
Does anyone know why
SELECT to_tsvector('an');
returns nothing, but
SELECT to_tsvector('nn');
or
SELECT to_tsvector('n');
or
SELECT to_tsvector('aa');
all return a result?
I am testing this on PostgreSQL 13 running on Supabase.
Thanks
Because "an" is a stop word in your current setup (probably English, the default).
From the documentation
The to_tsvector function internally calls a parser which breaks the document text into tokens and assigns a type to each token. For each token, a list of dictionaries (Section 12.6) is consulted, where the list can vary depending on the token type.
And (emphasis mine)...
Some words are recognized as stop words (Section 12.6.1), which causes them to be ignored since they occur too frequently to be useful in searching.
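For illustration, you can see this by naming the text search configuration explicitly; the 'simple' configuration has no stop-word list, so 'an' survives there:

SELECT to_tsvector('english', 'an');      -- empty tsvector: 'an' is an english stop word
SELECT to_tsvector('simple', 'an');       -- 'an':1, because 'simple' strips no stop words
SELECT * FROM ts_debug('english', 'an');  -- shows the token and its (empty) lexeme list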
I have some account issues on SCN, so I am making an attempt here.
We switched to Unicode and ran into some issues with that. The statement INFTY_TAB = PS+2 now raises an error saying that "the offset + length is exceeded".
I found some hints but couldn't really figure out how to fix this. And even when I manage to fix that error, I get a new error: 'Include report %HR_P9002 not found'. The infotype (IT) is still there, so is there something else I can check?
Definition of PS:
DATA: BEGIN OF PS OCCURS 0.
*This indicates if a record was read with disabled authority check.
data: authc_disabled(1) type c.
DATA: TCLAS LIKE PSPAR-TCLAS.
INCLUDE STRUCTURE PRELP.
DATA: ACRCD LIKE SY-SUBRC.
DATA: END OF PS.
TCLAS is a char(1) field.
This is the part where the error pops up:
INFTY_TAB = PS+2.
Error (translated by me, so apologies for any inaccuracies):
Offset and Length (=2432) exceed the length of the character based beginning (=2430) of the structure.
It depends on the length of INFTY_TAB. Because no length is specified, PS+2 keeps the full length of the character-like beginning of PS (2430), so offset 2 plus length 2430 = 2432, which runs past that 2430-character beginning. You have to set the length explicitly:
INFTY_TAB = PS+2(length).
Official information is here. The important point to note is that the inclusion of SY-SUBRC (which is an INT4 field) places a limit on the range of fields you can access using this (discouraged) method of access.
ASSIGN field+off TO is generally forbidden from a syntactical point of view since any offset <> 0 would cause the range to be exceeded.
Although the sentence above refers to the ASSIGN statement, it also applies to this situation.
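A rough sketch of the fix, assuming (from the error message) that the character-like beginning of PS is 2430 characters, so the slice after the 2-character prefix is 2428 characters; substitute the actual length of the PRELP part in your system:

* Restrict the offset/length access to the character-like part of PS.
* 2428 = 2430 (character-based beginning) - 2 (AUTHC_DISABLED + TCLAS).
INFTY_TAB = PS+2(2428).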
I am using Postman for Chrome to test some simple things with SharePoint's REST API.
I get an error right at the beginning while trying to retrieve certain fields from a list; here are some pictures:
SharePoint List:
Calling for CPU
Calling for RAM - which fails:
(EXmsg: Field Property 'RAM' does not exist)
Does anyone have a clue? I do not understand this at all.
Thanks in advance!
P.S.
When calling:
/_api/web/lists/getbytitle('Anwendungen')/items
RAM is displayed as a property of "zzfu", which makes no sense to me.
This error occurs because the $select query option accepts the field's internal name, not its display name.
That said, you first need to determine the field's internal name, for example:
/_api/web/lists/GetByTitle('Anwendungen')/fields?$select=InternalName&$filter=Title eq 'RAM'
Once the internal name is determined, you can query the field value like this:
/_api/web/lists/GetByTitle('Anwendungen')/items?$select=<RAM internal name>
where <RAM internal name> is the internal name of the RAM field.
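Given the observation in the question that RAM shows up as a property of "zzfu", that is presumably its internal name, so the final call would look something like:
/_api/web/lists/GetByTitle('Anwendungen')/items?$select=zzfu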
I'm using a plugin and want to perform an action based on the record's statuscode value. I've seen online that you can use entity.FormattedValues["statuscode"] to get values from option sets, but when I try it I get an error saying "The given key was not present in the dictionary".
I know this can happen when the plugin can't find the change for the field you're looking for, but I've already checked that it exists using entity.Contains("statuscode"); that check passes, yet I still hit this error.
Can anyone help me figure out why it's failing?
Thanks
I've not seen entity.FormattedValues before.
I usually use entity.Attributes, e.g. entity.Attributes["statuscode"].
MSDN
Edit
CRM wraps many of the values in objects that hold additional information; in this case statuscode uses an OptionSetValue, so to get the value you need:
((OptionSetValue)entity.Attributes["statuscode"]).Value
This will return a number, as that is the underlying value in CRM.
If you open the customisation options in CRM, you will usually (some system fields are locked down) be able to see the label and value for each option.
If you need the label, you could either hard-code it based on the information in CRM,
or you could retrieve it from the metadata services as described here.
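A rough sketch of that metadata approach (the entity logical name "account" is a placeholder, service is an IOrganizationService, statusValue is the integer obtained above, and the code needs System.Linq, Microsoft.Xrm.Sdk.Messages and Microsoft.Xrm.Sdk.Metadata):

// Look up the display label for a statuscode value via the metadata service.
var request = new RetrieveAttributeRequest
{
    EntityLogicalName = "account",   // placeholder entity logical name
    LogicalName = "statuscode",
    RetrieveAsIfPublished = true
};
var response = (RetrieveAttributeResponse)service.Execute(request);
var statusMetadata = (StatusAttributeMetadata)response.AttributeMetadata;

// Find the option whose underlying value matches and read its label.
string label = statusMetadata.OptionSet.Options
    .First(o => o.Value == statusValue)
    .Label.UserLocalizedLabel.Label;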
To avoid your error, you need to check the collection you wish to use (rather than the Attributes collection):
if (entity.FormattedValues.Contains("statuscode")){
var myStatusCode = entity.FormattedValues["statuscode"];
}
However, although the SDK doesn't confirm this, I suspect that FormattedValues are only ever present for numeric or currency attributes (partly speculation on my part, though).
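Putting the pieces together, a minimal sketch for a plugin (entity being the target Entity from the plugin context):

// Underlying option value - available whenever the attribute is present.
if (entity.Attributes.Contains("statuscode"))
{
    int statusValue = ((OptionSetValue)entity.Attributes["statuscode"]).Value;
}

// Display label - only present when CRM supplied a formatted value for it.
if (entity.FormattedValues.Contains("statuscode"))
{
    string statusLabel = entity.FormattedValues["statuscode"];
}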
entity.FormattedValues only works for string display values.
For example, if you have an option set whose display names are 1, 2, 3,
the statement above does not recognise those values because they are integers. If you look at the exact definition of formatted values at the link below
http://msdn.microsoft.com/en-in/library/microsoft.xrm.sdk.formattedvaluecollection.aspx
you will find that it is only valid for string display values. If you try to use it with integer values, it will throw the "key not found in dictionary" exception.
So avoid this statement for retrieving option sets whose display names are integers.
Try this:
string title = entity.FormattedValues.Contains("title") ? entity.FormattedValues["title"] : "";
When you are talking about an option set, you have a value and a label; what this gives you is the label. The conditional ('?') makes sure that a null value is never passed.
I am new to using InterSystems Caché and face an issue where I am querying data stored in Caché, exposed by classes which do not seem to accurately represent the data in the underlying system. The data stored in the globals is almost always larger than what is defined in the object code.
As such I get errors like the one below very frequently.
Msg 7347, Level 16, State 1, Line 2
OLE DB provider 'MSDASQL' for linked server 'cache' returned data that does not match expected data length for column '[cache]..[namespace].[tablename].columname'. The (maximum) expected data length is 5, while the returned data length is 6.
Does anyone have any experience with implementing some type of quality process to ensure that the object definitions (SQL mappings) are maintained in such a way that they can accommodate the data which is being persisted in the globals?
Property columname As %String(MAXLEN = 5, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];
In this particular example the system has the column defined with a MAXLEN of 5; however, the data stored in the system is 6 characters long.
How can I proactively monitor and repair such situations?
/*
I did not create these object definitions in cache
*/
It's not completely clear what "monitor and repair" would mean for you, but:
How much control do you have over the database side? Caché runs code for a data type when converting from a global to ODBC, using the LogicalToODBC method of the data-type class. If you change the property types from %String to your own class, AppropriatelyNamedString, then you can override that method to automatically truncate, if that's what you want to do. It is possible to change all the %String property types programmatically using the %Library.CompiledClass class.
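A very rough sketch of that idea, with a hypothetical class name and a hard-coded width; a real version would read the property's MAXLEN parameter (typically via a method generator) instead of the literal 5:

Class App.TruncatedString Extends %String
{

/// Truncate on the way out to ODBC so clients never see an over-length value.
/// The width of 5 is hard-coded here purely for illustration.
ClassMethod LogicalToODBC(%val As %String = "") As %String
{
    Quit $EXTRACT(%val, 1, 5)
}

}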
It is also possible to run code within Caché to find records with properties that are above the (somewhat theoretical) maximum length. This obviously would require full table scans. It is even possible to expose that code as a stored procedure.
Again, I don't know exactly what you are trying to do, but those are some options. They probably do require getting deeper into the Caché side than you would prefer.
As far as preventing the bad data in the first place, there is no general answer. Caché allows programmers to write directly to the globals, bypassing any object or table definitions. If that is happening, the code doing so must be fixed directly.
Edit: Here is code that might work in detecting bad data. It might not work if you are doing certain unusual things, but it worked for me. It's kind of ugly because I didn't want to break it up into methods or tags. This is meant to run from a command prompt, so it would probably have to be modified for your purposes.
{
S ClassQuery=##CLASS(%ResultSet).%New("%Dictionary.ClassDefinition:SubclassOf")
I 'ClassQuery.Execute("%Library.Persistent") b q
While ClassQuery.Next(.sc) {
If $$$ISERR(sc) b Quit
S ClassName=ClassQuery.Data("Name")
I $E(ClassName)="%" continue
S OneClassQuery=##CLASS(%ResultSet).%New(ClassName_":Extent")
I '$IsObject(OneClassQuery) continue //may not exist
try {
I 'OneClassQuery.Execute() D OneClassQuery.Close() continue
}
catch
{
D OneClassQuery.Close()
continue
}
S PropertyQuery=##CLASS(%ResultSet).%New("%Dictionary.PropertyDefinition:Summary")
K Properties
s sc=PropertyQuery.Execute(ClassName) I 'sc D PropertyQuery.Close() continue
While PropertyQuery.Next()
{
s PropertyName=$G(PropertyQuery.Data("Name"))
S PropertyDefinition=""
S PropertyDefinition=##CLASS(%Dictionary.PropertyDefinition).%OpenId(ClassName_"||"_PropertyName)
I '$IsObject(PropertyDefinition) continue
I PropertyDefinition.Private continue
I PropertyDefinition.SqlFieldName=""
{
S Properties(PropertyName)=PropertyName
}
else
{
I PropertyName'="" S Properties(PropertyDefinition.SqlFieldName)=PropertyName
}
}
D PropertyQuery.Close()
I '$D(Properties) continue
While OneClassQuery.Next(.sc2) {
B:'sc2
S ID=OneClassQuery.Data("ID")
Set OneRowQuery=##class(%ResultSet).%New("%DynamicQuery:SQL")
S sc=OneRowQuery.Prepare("Select * FROM "_ClassName_" WHERE ID=?") continue:'sc
S sc=OneRowQuery.Execute(ID) continue:'sc
I 'OneRowQuery.Next() D OneRowQuery.Close() continue
S PropertyName=""
F S PropertyName=$O(Properties(PropertyName)) Q:PropertyName="" d
. S PropertyValue=$G(OneRowQuery.Data(PropertyName))
. I PropertyValue'="" D
.. S PropertyIsValid=$ZOBJClassMETHOD(ClassName,Properties(PropertyName)_"IsValid",PropertyValue)
.. I 'PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has invalid value of "_PropertyValue
.. //I PropertyIsValid W !,ClassName,":",ID,":",PropertyName," has VALID value of "_PropertyValue
D OneRowQuery.Close()
}
D OneClassQuery.Close()
}
D ClassQuery.Close()
}
The simplest solution is to increase the MAXLEN parameter to 6 or larger. Caché only enforces MAXLEN and TRUNCATE when saving. Within other Caché code this is usually fine, but unfortunately ODBC clients tend to expect this to be enforced more strictly. The other option is to write your SQL like SELECT LEFT(columnname, 5)...
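For instance, the property from the question would become (only MAXLEN changes):
Property columname As %String(MAXLEN = 6, TRUNCATE = 1) [ Required, SqlColumnNumber = 2, SqlFieldName = columname ];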
The simplest solution, which I use for all Integration Services packages, for example, is to create a query that casts all nvarchar or char data to the correct length. This way, my data never fails due to truncation.
Optional:
First run a query like: SELECT MAX(DATALENGTH(mycolumnname)) FROM cachenamespace.tablename
Your new query: SELECT CAST(mycolumnname AS varchar(6)) AS mycolumnname,
CONVERT(varchar(8000), memo_field) AS memo_field
FROM cachenamespace.tablename
Your pain of getting the data will be lessened but not eliminated.
If you use any type of OLE DB provider, or if you use an OPENQUERY in SQL Server,
the casts must occur in the query sent to the InterSystems Caché database, not in the outer query that retrieves data from the inner OPENQUERY.