Unable to rename existing edge relationships; Using UNSAFE fails - orientdb

I cannot seem to rename an Edge. Is this possible with OrientDB?
I am running OrientDB in distributed mode on 3 servers.
Each server is configured as
OS: CentOS Linux release 7.5.1804 (Core)
OrientDB: 3.0.9
In the Studio web interface, to rename an Edge, I click the "rename" button adjacent to the Edge I want to rename.
I get this message at the bottom of the screen:
com.orientechnologies.orient.core.exception.OCommandExecutionException: Cannot alter class 'Transfer2' because is an Edge class and could break vertices. Use UNSAFE if you want to force it DB name="marksluser"
In the console, I execute
orientdb {db=marksluser}> ALTER CLASS Transfer2 NAME Transfer UNSAFE;
Error: com.orientechnologies.orient.core.exception.OCommandExecutionException: Invalid class name: ALTER CLASS Transfer2 NAME Transfer UNSAFE
DB name="marksluser"
DB name="marksluser"
Error: com.orientechnologies.orient.core.exception.OCommandExecutionException: Cannot alter class 'Transfer2' because is an Edge class and could break vertices. Use UNSAFE if you want to force it
DB name="marksluser"
I also tried renaming the in and out properties on the vertices to the new name, as described in Renaming existing edge relationships?, but it still didn't work.
Am I doing something wrong?

I tried this again with a clean database running 3.0.9 and the issue remains. I tried it with the development version 3.1.0-M2 and using "UNSAFE" works!
Looks like I will be working on the 3.1.0-M2 release.
So if you want to rename an edge from "Edge1" to "Edge2" and all your vertices are subclasses of V, execute:
ALTER CLASS Edge1 NAME Edge2 UNSAFE;
UPDATE V SET out_Edge2 = out_Edge1 WHERE out_Edge1 IS NOT NULL;
UPDATE V SET in_Edge2 = in_Edge1 WHERE in_Edge1 IS NOT NULL;
UPDATE V REMOVE out_Edge1 WHERE out_Edge1 IS NOT NULL;
UPDATE V REMOVE in_Edge1 WHERE in_Edge1 IS NOT NULL;
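As a sanity check afterwards (a quick sketch, assuming all vertices really do extend V), the first query should return 0 and the second should match the old edge count:
SELECT count(*) FROM V WHERE out_Edge1 IS NOT NULL OR in_Edge1 IS NOT NULL;
SELECT count(*) FROM Edge2;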


What causes MARSHALLINGERROR when creating a znode?

I am doing a simple createAsync() with my ZooKeeperNetEx NuGet package and it is throwing an exception triggered by a MARSHALLINGERROR.
Here is the two-line summary (between these lines, the connection to ZooKeeper is successfully confirmed):
var Zoo = new ZooKeeper("localhost:50002", 5000, new ClusterWatcher());
. . .
var parentNode = Zoo.createAsync("/election", null, null, CreateMode.PERSISTENT).Result;
I do not get it. ClusterWatcher is my own class derived from Watcher, of course. Yes, I am writing this in C#, but this is such a simple matter that I would not think it mattered. The host machine is running Windows 10 Pro, if that matters.
This exception can be triggered by not specifying an ACL (you seem to pass null). In Java you can pass the predefined list ZooDefs.Ids.OPEN_ACL_UNSAFE (or one of the others defined in that class); for C# there will probably be a similarly named constant.
In the Java client library this is a convenience constant that is defined as:
/**
* This is a completely open ACL.
*/
public final ArrayList<ACL> OPEN_ACL_UNSAFE = new ArrayList<ACL>(
Collections.singletonList(new ACL(Perms.ALL, ANYONE_ID_UNSAFE))
);
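Applied to the snippet above, the fix is simply to pass a real ACL list instead of null. A minimal sketch for ZooKeeperNetEx, assuming the port mirrors the Java names (it generally does):
var Zoo = new ZooKeeper("localhost:50002", 5000, new ClusterWatcher());
// OPEN_ACL_UNSAFE = world:anyone gets all permissions; fine for experiments,
// not for anything security-sensitive.
var parentNode = Zoo.createAsync("/election", null,
                                 ZooDefs.Ids.OPEN_ACL_UNSAFE,
                                 CreateMode.PERSISTENT).Result;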

What could I be missing in my TYPO3 extension to cause a "table does not exist" error?

I am getting this error after adding a class from another extension to my extension:
Uncaught TYPO3 Exception
#1247602160: Table 'deva.tx_bingoprizes_domain_model_hall' doesn't exist: SELECT tx_bingoprizes_domain_model_hall.* FROM tx_bingoprizes_domain_model_hall WHERE tx_bingoprizes_domain_model_hall.uid IN ('0') LIMIT 1
Tx_Extbase_Persistence_Storage_Exception_SqlError thrown in file
/home/typo3_src/typo3_src-4.5.32/typo3/sysext/extbase/Classes/Persistence/Storage/Typo3DbBackend.php in line 1008.
The added class is Tx_Bingoprizes_Domain_Model_Hall, which should read from the table tx_bpscore_domain_model_hall, since I added this to the setup file:
config.tx_extbase.persistence.classes {
    Tx_Bingoprizes_Domain_Model_Hall {
        mapping {
            tableName = tx_bpscore_domain_model_hall
        }
    }
}
as I did for another extension which also reuses this class and which works properly (I used it as my model for how to do this and, as near as I can tell, did everything the same way). Why is TYPO3 still trying to use the table tx_bingoprizes_domain_model_hall? Where else do I need to specify the other table? I tried restarting the server, clearing caches, and reinstalling the extension, but I still get the error.
I am using the latest TYPO3 4.5.
Thanks
To reiterate my comment as the answer...
OK, I got it. Once again I had forgotten to include the necessary static template (in this case bingoprizes) in the page's template. So the error was not in my extension but in the TYPO3 configuration for the page. I hate that; I forget it all the time. It is counter-intuitive to me, as I find it natural to assume the setup.txt stuff is automatically included on any page that uses my extension.
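For reference, the backend fix is to add the extension under "Include static (from extensions)" in the page's template record. The pure-TypoScript equivalent looks roughly like the line below, where the file path is only an assumption about where the extension keeps its static setup:
<INCLUDE_TYPOSCRIPT: source="FILE:EXT:bingoprizes/Configuration/TypoScript/setup.txt">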

Data models generated by Sqlautocode: 'RelationshipProperty' object has no attribute 'c'

Using PGModeler, we created a schema and then exported out some appropriate SQL code. The SQL commands were able to populate the appropriate tables and rows in our Postgres database.
From here, we wanted to create declarative Sqlalchemy models, and so went with Sqlautocode. We ran it at the terminal:
sqlautocode postgresql+psycopg2://username:password@host/db_name -o models.py -d
And it generated our tables and corresponding models as expected. So far, zero errors.
Then, in IPython, I imported everything from models.py and simply tried creating an instance of a class defined there. Suddenly, I got this error:
AttributeError: 'RelationshipProperty' object has no attribute 'c'
This one left me confused for a while. The other SO threads that discuss this had solutions nowhere near my issue (often related to a specific framework or syntax not being used by sqlautocode).
After finding the reason, I decided to document the issue at hand. See below.
Our problem was simply due to the bad names sqlautocode gave to the generated variables. Specifically, the bad naming happened with any model that had a foreign key to itself.
Here's an example:
# Note that all "relationship"s below are now "relation"
# it is labeled relationship here because I was playing around...
service_catalog = Table(u'service_catalog', metadata,
    Column(u'id', BIGINT(), nullable=False),
    Column(u'uuid', UUID(), primary_key=True, nullable=False),
    Column(u'organization_id', INTEGER(), ForeignKey('organization.id')),
    Column(u'type', TEXT()),
    Column(u'name', TEXT()),
    Column(u'parent_service_id', BIGINT(), ForeignKey('service_catalog.id')),
)

# Later on...
class ServiceCatalog(DeclarativeBase):
    __table__ = service_catalog

    # relation definitions
    organization = relationship('Organization',
        primaryjoin='ServiceCatalog.organization_id==Organization.id')
    activities = relationship('Activity',
        primaryjoin='ServiceCatalog.id==ActivityService.service_id',
        secondary=activity_service,
        secondaryjoin='ActivityService.activity_id==Activity.id')
    service_catalog = relationship('ServiceCatalog',
        primaryjoin='ServiceCatalog.parent_service_id==ServiceCatalog.id')
    organizations = relationship('Organization',
        primaryjoin='ServiceCatalog.id==ServiceCatalog.parent_service_id',
        secondary=service_catalog,
        secondaryjoin='ServiceCatalog.organization_id==Organization.id')
In ServiceCatalog.organizations, the secondary table is supposed to be the service_catalog Table, but inside the class body that name has just been rebound to the relation defined on the line above, so secondary ends up pointing at a RelationshipProperty (which has no .c collection of columns) instead of a Table. Switching the order of the two definitions fixes the issue.
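For illustration, a sketch of the same generated definitions with the order swapped (everything else exactly as sqlautocode produced it), so that secondary=service_catalog still resolves to the module-level Table:
class ServiceCatalog(DeclarativeBase):
    __table__ = service_catalog

    organization = relationship('Organization',
        primaryjoin='ServiceCatalog.organization_id==Organization.id')
    activities = relationship('Activity',
        primaryjoin='ServiceCatalog.id==ActivityService.service_id',
        secondary=activity_service,
        secondaryjoin='ActivityService.activity_id==Activity.id')
    # Defined before the self-referential relation below, so `service_catalog`
    # here still names the Table object, not a RelationshipProperty.
    organizations = relationship('Organization',
        primaryjoin='ServiceCatalog.id==ServiceCatalog.parent_service_id',
        secondary=service_catalog,
        secondaryjoin='ServiceCatalog.organization_id==Organization.id')
    service_catalog = relationship('ServiceCatalog',
        primaryjoin='ServiceCatalog.parent_service_id==ServiceCatalog.id')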

How can I override SQL scripts generated by MigratorScriptingDecorator

Using Entity Framework 4.3.1 Code first, and Data Migrations.
I have written a utility to automatically generate the Migration scripts for a target database, using the MigratorScriptingDecorator.
However, sometimes when re-generating the target database from scratch, the generated script is invalid, in that it declares a variable with the same name twice.
The variable name is @var0.
This appears to happen when there are multiple migrations being applied, and when at least two result in a default constraint being dropped.
The problem occurs both when generating the script from code and when using the Package Manager console command:
Update-Database -Script
Here are the offending snippets from the generated script:
DECLARE @var0 nvarchar(128)
SELECT @var0 = name
FROM sys.default_constraints
WHERE parent_object_id = object_id(N'SomeTableName')
and
DECLARE @var0 nvarchar(128)
SELECT @var0 = name
FROM sys.default_constraints
WHERE parent_object_id = object_id(N'SomeOtherTableName')
I would like to be able to override the point where it generates the SQL for each migration, and then add a "GO" statement so that each migration is in a separate batch, which would solve the problem.
Anyone have any ideas how to do this, or if I'm barking up the wrong tree then maybe you could suggest a better approach?
So with extensive use of ILSpy and some pointers in the answer to this question I found a way.
Details below for those interested.
Problem
The SqlServerMigrationSqlGenerator is the class ultimately responsible for creating the SQL statements that get executed against the target database or scripted out when using the -Script switch in the Package Manager console or when using the MigratorScriptingDecorator.
Workings
Examining the Generate overload in SqlServerMigrationSqlGenerator that is responsible for DROP COLUMN, it looks like this:
protected virtual void Generate(DropColumnOperation dropColumnOperation)
{
    RuntimeFailureMethods
        .Requires(dropColumnOperation != null, null, "dropColumnOperation != null");
    using (IndentedTextWriter indentedTextWriter =
        SqlServerMigrationSqlGenerator.Writer())
    {
        string value = "@var" + this._variableCounter++;
        indentedTextWriter.Write("DECLARE ");
        indentedTextWriter.Write(value);
        indentedTextWriter.WriteLine(" nvarchar(128)");
        indentedTextWriter.Write("SELECT ");
        indentedTextWriter.Write(value);
        indentedTextWriter.WriteLine(" = name");
        indentedTextWriter.WriteLine("FROM sys.default_constraints");
        indentedTextWriter.Write("WHERE parent_object_id = object_id(N'");
        indentedTextWriter.Write(dropColumnOperation.Table);
        indentedTextWriter.WriteLine("')");
        indentedTextWriter.Write("AND col_name(parent_object_id, parent_column_id) = '");
        indentedTextWriter.Write(dropColumnOperation.Name);
        indentedTextWriter.WriteLine("';");
        indentedTextWriter.Write("IF ");
        indentedTextWriter.Write(value);
        indentedTextWriter.WriteLine(" IS NOT NULL");
        indentedTextWriter.Indent++;
        indentedTextWriter.Write("EXECUTE('ALTER TABLE ");
        indentedTextWriter.Write(this.Name(dropColumnOperation.Table));
        indentedTextWriter.Write(" DROP CONSTRAINT ' + ");
        indentedTextWriter.Write(value);
        indentedTextWriter.WriteLine(")");
        indentedTextWriter.Indent--;
        indentedTextWriter.Write("ALTER TABLE ");
        indentedTextWriter.Write(this.Name(dropColumnOperation.Table));
        indentedTextWriter.Write(" DROP COLUMN ");
        indentedTextWriter.Write(this.Quote(dropColumnOperation.Name));
        this.Statement(indentedTextWriter);
    }
}
You can see it keeps track of the variable names used, but this only appears to happen within a single migration. So if one migration contains more than one DROP COLUMN, the above works fine; but if two migrations each result in a DROP COLUMN being generated, the _variableCounter variable is reset between them.
No problems are experienced when not generating a script, as each statement is executed immediately against the database (I checked using SQL Profiler).
If you generate a SQL script and want to run it as-is, though, you have a problem.
Solution
I created a new BatchSqlServerMigrationSqlGenerator inheriting from SqlServerMigrationSqlGenerator as follows (note you need using System.Data.Entity.Migrations.Sql;):
public class BatchSqlServerMigrationSqlGenerator : SqlServerMigrationSqlGenerator
{
    protected override void Generate(
        System.Data.Entity.Migrations.Model.DropColumnOperation dropColumnOperation)
    {
        base.Generate(dropColumnOperation);
        Statement("GO");
    }
}
Now to force the migrations to use your custom generator you have two options:
If you want it to be integrated into the Package Manager console, add the line below to your Configuration class:
SetSqlGenerator("System.Data.SqlClient",
new BatchSqlServerMigrationSqlGenerator());
If you're generating the script from code (like I was), add a similar line wherever you set up your migrations Configuration in code:
migrationsConfiguration.SetSqlGenerator(DataProviderInvariantName,
new BatchSqlServerMigrationSqlGenerator());
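For completeness, a rough sketch of the from-code path (Configuration here stands in for your own DbMigrationsConfiguration subclass; DbMigrator, MigratorScriptingDecorator and ScriptUpdate are the standard EF types and members):
var configuration = new Configuration();
configuration.SetSqlGenerator("System.Data.SqlClient",
    new BatchSqlServerMigrationSqlGenerator());

var migrator = new DbMigrator(configuration);
var scriptor = new MigratorScriptingDecorator(migrator);

// Passing nulls scripts everything pending, up to the latest migration.
string script = scriptor.ScriptUpdate(sourceMigration: null, targetMigration: null);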

Accessing cache.dat through ODBC

OK, so I am trying to extract the information from a cache.dat database sent from another business. I am trying to get at the data using ODBC. I am able to see the globals from the samples namespace when trying to export to Access, but I can't get the data from this new database to show up.
I've tried to tackle this problem two ways. First, I simply shut down Caché, replaced the existing database in InterSystems\TryCache\mgr\samples, and restarted Caché. Once I restarted, I could see all the globals in the Management Portal from the new database. If I test the ODBC connection from the Windows ODBC administrator, it connects. However, when I try to pull the data into an Access database using ODBC, there are no tables showing up to import.
I've also tried to add the database to my Caché installation, but it gave me this error:
ERROR #5805: ID key not unique for extent 'Config.Databases'
I tried to fool around with the values in there but to no avail. This is my first time messing with anything like this and any, ANY help would be awesome.
If you access the Management Portal, do you see any table definitions defined for your namespace? If not, the application was written in Caché ObjectScript with no classes created to provide object/SQL access. If that is the case, it could be a fair amount of work to create the classes that describe the data (the global structures).
Matt,
Did the business that provided the CACHE.DAT file indicate that you should have ODBC access to the data?
Did they provide some document describing the data/globals? If they provided a document that describes the globals you could create the classes that map the data. Depending on what you want to do this could either be a resource intensive process or not.
If you want to directly access globals you can create a stored procedure that will do so. You should consider the security implications before you do this - it will expose all data in the global to anyone with ODBC access.
Here is an example of a stored procedure that returns the values of up to 9 global subscripts, plus the value at that node. You can modify it pretty easily if you need to.
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [SqlProc]
{
}

ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
    // Seed the traversal with the global reference, e.g. "^MYGLOBAL"
    S qHandle="^"_GlobalName
    Quit $$$OK
}

ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    Quit $$$OK
}

ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    S Q=qHandle
    // $QUERY with indirection advances to the next node of the global
    S Q=$Q(@Q)
    I Q="" S Row="",AtEnd=1 Q $$$OK
    S Depth=$QL(Q)
    // Column 1 is the node value, columns 2-10 are up to nine subscripts
    S $LI(Row,1)=$G(@Q)
    F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
    F I=Depth+1:1:9 S $LI(Row,I+1)=""
    S AtEnd=0
    S qHandle=Q
    Quit $$$OK
}
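Once the class is compiled, the query is projected to SQL as a stored procedure, by default named ClassName_OneGlobal in the schema that corresponds to the package, so from any ODBC-attached SQL tool you can call it along these lines (package and class names are whatever you used):
CALL MyPackage.MyClass_OneGlobal('MYGLOBAL')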
I don't have code for you to get this from Access, but for reference, to access it from Python you might use (with pyodbc):
import pyodbc
import win32com.client
import urllib2

class CacheOdbcClient:

    connectionString = "DSN=MYCACHEDSN"

    def __init__(self):
        pass

    def getGlobalAsOverlyLargeList(self):
        connection = pyodbc.connect(self.connectionString)
        cursor = connection.cursor()
        cursor.execute("call MyPackageName.MyClassName_OneGlobal ?", "MYGLOBAL")
        list = []
        for row in cursor:
            list.append((row.NodeValue, row.Sub1, row.Sub2, row.Sub3, row.Sub4,
                         row.Sub5, row.Sub6, row.Sub7, row.Sub8, row.Sub9))
        return list