OrientDB: no way to escape keywords

How to avoid incorrect query parsing?
In the next example, the class name Contains is parsed as a keyword:
CREATE EDGE Contains FROM #14:0 TO #14:1
It returns the exception "Error: com.orientechnologies.orient.core.sql.OCommandSQLParsingException: Error on parsing command at position #0: Encountered " "Contains "" at line 1, column 13."
But I can create the edge class without any exception:
CREATE CLASS Contains EXTENDS E
In OrientDB v2.1.0 the following query worked correctly:
CREATE EDGE `Contains` FROM #14:0 TO #14:1
But in 2.1.1 it is broken again (as far as I know, escaping with backticks is also not supported in older versions).
Solved
The bug was present in versions 2.1.1-2.1.2 and has now been fixed (https://github.com/orientechnologies/orientdb/issues/4980).
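When building OrientDB SQL strings in application code, one way to avoid this class of parse error is to backtick-quote any identifier that collides with a keyword. A minimal sketch (the reserved-word list here is illustrative, not exhaustive, and the helper is hypothetical):

```python
# Hypothetical helper for building OrientDB SQL: backtick-quote class names
# that collide with SQL keywords. The reserved-word set below is
# illustrative, not exhaustive.
RESERVED = {"contains", "like", "and", "or", "not", "in", "between", "select"}

def quote_ident(name: str) -> str:
    """Wrap the identifier in backticks if it is a reserved keyword."""
    return f"`{name}`" if name.lower() in RESERVED else name

print(f"CREATE EDGE {quote_ident('Contains')} FROM #14:0 TO #14:1")
# CREATE EDGE `Contains` FROM #14:0 TO #14:1
```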

Related

Flutter Drift - parameters prohibited in CHECK constraints

I'm trying to use the drift library (renamed from moor), but it's unable to create tables because of this error:
SqliteException(1): parameters prohibited in CHECK constraints, SQL logic error (code 1)
CREATE TABLE IF NOT EXISTS enterprise (name TEXT NOT NULL CHECK(LENGTH(name) <= ?), ..other fields);
This is the table class causing the error:
class Enterprise extends Table {
TextColumn get name =>
text().check(name.length.isSmallerOrEqualValue(maxNameLength))();
// ...other fields
}
The error goes away if I remove the check. Could someone explain why the check isn't working?
It turns out this is a bug in drift. The following workaround was suggested by the author of drift; the bug will be fixed in a future release.
Replace
check(name.length.isSmallerOrEqualValue(maxNameLength))
With this:
check(name.length.isSmallerOrEqual(Constant(maxNameLength)))
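The underlying SQLite behaviour can be reproduced directly with Python's built-in sqlite3 module (a minimal sketch, independent of drift): a bind parameter inside a CHECK constraint is rejected when the statement is prepared, while an inlined constant — which is presumably what Constant() makes drift emit in the generated DDL — works fine.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# A bind parameter inside a CHECK constraint is rejected at prepare time:
try:
    conn.execute(
        "CREATE TABLE t1 (name TEXT NOT NULL CHECK(LENGTH(name) <= ?))", (16,)
    )
except sqlite3.OperationalError as e:
    print(e)  # parameters prohibited in CHECK constraints

# Inlining the constant into the DDL works:
conn.execute("CREATE TABLE t2 (name TEXT NOT NULL CHECK(LENGTH(name) <= 16))")
```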

Special characters when creating a table via JDBC

I am trying to export a table from a cluster to Oracle using JDBC:
df.write.mode('append').option("createTableColumnTypes", "S_ID INT, BREAK_$ FLOAT").jdbc(jdbc, ORACLE_TABLENAME, properties = properties)
I am getting the following error:
As I understand it, this is because the $ symbol is a special character. How can I solve this problem if I cannot avoid such a column name? I tried using 'BREAK_$' and {BREAK_$}, but neither worked.
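The question has no accepted answer. One thing worth trying (an assumption on my part, not verified against this thread) is that Spark parses the createTableColumnTypes string with its own DDL parser, which quotes identifiers with backticks rather than the single quotes or braces tried above. A tiny sketch of that quoting rule:

```python
import re

def quote_spark_ident(name: str) -> str:
    """Backtick-quote an identifier unless it is purely [A-Za-z0-9_].

    Hypothetical helper: the assumption is that Spark SQL's DDL parser
    accepts backtick-quoted identifiers, unlike the quoting styles tried
    in the question.
    """
    return name if re.fullmatch(r"[A-Za-z0-9_]+", name) else f"`{name}`"

cols = f"S_ID INT, {quote_spark_ident('BREAK_$')} FLOAT"
print(cols)  # S_ID INT, `BREAK_$` FLOAT
```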

Postgres array range notation with jOOQ

In my Postgres database I have an array of JSON values (json[]) to keep track of errors in the form of JSON objects. When an update comes in, I want to append the latest error to that array but keep only the latest 5 errors (all SQL is simplified):
update dc
set
error_history =
array_append(dc.error_history[array_length(dc.error_history, 1) - 4:array_length(dc.error_history, 1)+1], cast('{"blah": 6}' as json))
where dc.id = 'f57520db-5b03-4586-8e77-284ed2dca6b1'
;
This works fine in native SQL. I tried to replicate it in jOOQ as follows:
.set(dc.ERROR_HISTORY,
field("array_append(dc.error_history[array_length(dc.error_history, 1) - 4:array_length(dc.error_history, 1) + 1], cast('{0}' as json))", dc.ERROR_HISTORY.getType(), latestError)
);
But it seems the : causes the library to think there is a bind parameter there. The generated SQL is:
array_append(dc.error_history[array_length(dc.error_history, 1) - 4?(dc.error_history, 1) + 1], cast('{0}' as json)
and the error I get is:
nested exception is org.postgresql.util.PSQLException: ERROR: syntax error at or near "$5"
which I totally agree with :D
Is there some way to escape the : in the Java code, or is there a better way to do this?
Edit:
I have also tried simply removing the first element on update using the arrayRemove function, but that didn't work either: it operates by element rather than by index, and Postgres doesn't know how to check JSON elements for equality.
OMG, the answer was really simple :D
field("array_append(dc.error_history[array_length(dc.error_history, 1) - 4 : array_length(dc.error_history, 1) + 1], cast({0} as json))", dc.ERROR_HISTORY.getType(), latestError)
Just add spaces around the colon and it works correctly. Note that I made another mistake by putting single quotes around {0}: the latestError object is already of type JSON.
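For reference, the slice-and-append logic the template implements — keep the last four entries, then append the new one, so at most five are retained — can be sketched in plain Python (a sketch of the semantics, not of the jOOQ API):

```python
def append_capped(history, new_entry, cap=5):
    """Keep the last (cap - 1) entries of history, then append new_entry.

    Mirrors the SQL: slice off everything but the tail, then array_append.
    """
    return history[-(cap - 1):] + [new_entry]

print(append_capped([1, 2, 3, 4, 5], 6))  # [2, 3, 4, 5, 6]
print(append_capped([], 1))               # [1]
```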

Using pysnmp for Juniper OIDs (with octets)

The Juniper knowledge base says that you can query jnxOperatingCPU.x.x.x.x to get the memory usage from the device, where x.x.x.x is "the last 4 octets"; in my case "9.1.0.0".
I don't seem to be able to get results like this using pysnmp's getCmd() method. I have the JUNIPER-MIB in place, but the script returns:
No symbol JUNIPER-MIB::jnxOperatingCPU.9.1.0.0 at <pysnmp.smi.builder.MibBuilder object at 0x198b810>
I have another SNMP monitoring tool in place that can reach this OID, so I know it's valid on this device. I can also use the full numeric OID to get the value, but I'd rather have the pretty name.
Might anyone have an example of using such an OID with pysnmp.hlapi?
From the error message it looks like you are using the ObjectIdentity class incorrectly (pasting a code snippet would be helpful, though).
According to the JUNIPER-MIB the jnxOperatingCPU object belongs to the jnxOperatingTable table which has these indices:
jnxOperatingEntry OBJECT-TYPE
SYNTAX JnxOperatingEntry
MAX-ACCESS not-accessible
STATUS current
DESCRIPTION
"An entry of operating status table."
INDEX { jnxOperatingContentsIndex,
jnxOperatingL1Index,
jnxOperatingL2Index,
jnxOperatingL3Index }
::= { jnxOperatingTable 1 }
All four indices are of type Integer32.
Therefore try this:
ObjectIdentity('JUNIPER-MIB', 'jnxOperatingCPU', 9, 1, 0, 0)
Here is the documentation on the ObjectIdentity class.
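What ObjectIdentity resolves that call to can be sketched without pysnmp: the four integer table indices are simply appended to the column's OID. The base OID below is taken from JUNIPER-MIB for jnxOperatingCPU and should be treated as an assumption:

```python
# Sketch of the instance OID that
# ObjectIdentity('JUNIPER-MIB', 'jnxOperatingCPU', 9, 1, 0, 0)
# resolves to: the four table indices are appended to the column OID.
# The base OID is an assumption taken from JUNIPER-MIB.
JNX_OPERATING_CPU = (1, 3, 6, 1, 4, 1, 2636, 3, 1, 13, 1, 8)

def instance_oid(base, *indices):
    """Join a base OID and its table indices into dotted notation."""
    return ".".join(str(n) for n in base + indices)

print(instance_oid(JNX_OPERATING_CPU, 9, 1, 0, 0))
```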

Problem with the deprecation of the PostgreSQL xml2 module's 'xml_is_well_formed' function

We need to make extensive use of the xml_is_well_formed function provided by the xml2 module.
Yet the documentation says that the xml2 module is deprecated, since "XML syntax checking and XPath queries" are covered by the XML-related functionality based on the SQL/XML standard in the core server from PostgreSQL 8.3 onwards.
However, the core XMLPARSE function does not provide equivalent functionality: when it detects an invalid XML document, it throws an error rather than returning a truth value (which is what we need and currently have with xml_is_well_formed).
For example:
select xml_is_well_formed('<br></br2>');
xml_is_well_formed
--------------------
f
(1 row)
select XMLPARSE( DOCUMENT '<br></br2>' );
ERROR: invalid XML document
DETAIL: Entity: line 1: parser error : expected '>'
<br></br2>
^
Entity: line 1: parser error : Extra content at the end of the document
<br></br2>
^
Is there some way to use the new, core XML functionality to simply return a truth value in the way that we need?
Thanks,
-- Mike Berrow
After asking about this on the pgsql-hackers mailing list, I am happy to report that the people there agreed it was still needed, and they have now moved this function into the core.
See:
http://web.archiveorange.com/archive/v/alpsnGpFlZa76Oz8DjLs
and
http://postgresql.1045698.n5.nabble.com/review-xml-is-well-formed-td2258322.html
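If you are stuck on a version without the core function, the same boolean check is easy to reproduce application-side. For example, with Python's standard library (a sketch of the idea, not guaranteed to match the server function in every edge case):

```python
# App-side equivalent of xml_is_well_formed: parse and return a boolean
# instead of letting the parser raise, mirroring what the Postgres core
# function now does for XMLPARSE-style checks.
import xml.etree.ElementTree as ET

def xml_is_well_formed(text: str) -> bool:
    """Return True if the string parses as a well-formed XML document."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

print(xml_is_well_formed("<br></br2>"))  # False
print(xml_is_well_formed("<br/>"))       # True
```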