Data models generated by Sqlautocode: 'RelationshipProperty' object has no attribute 'c' - postgresql

Using PGModeler, we created a schema and then exported the appropriate SQL code. The SQL commands populated the appropriate tables and rows in our Postgres database.
From here, we wanted to create declarative Sqlalchemy models, and so went with Sqlautocode. We ran it at the terminal:
sqlautocode postgresql+psycopg2://username:password@host/db_name -o models.py -d
And it generated our tables and corresponding models as expected. So far, zero errors.
Then, in ipython, I imported everything from models.py and simply tried creating an instance of a class defined there. Suddenly, I got this error:
AttributeError: 'RelationshipProperty' object has no attribute 'c'
This one left me confused for a while. The other SO threads that discuss this had solutions nowhere near my issue (often related to a specific framework or syntax not being used by sqlautocode).
After finding the reason, I decided to document the issue at hand. See below.

Our problem was simply bad naming of the variables sqlautocode generated. Specifically, the bad naming happened with any model that had a foreign key to itself.
Here's an example:
# Note that all "relationship"s below are now "relation";
# it is labeled relationship here because I was playing around...
service_catalog = Table(u'service_catalog', metadata,
    Column(u'id', BIGINT(), nullable=False),
    Column(u'uuid', UUID(), primary_key=True, nullable=False),
    Column(u'organization_id', INTEGER(), ForeignKey('organization.id')),
    Column(u'type', TEXT()),
    Column(u'name', TEXT()),
    Column(u'parent_service_id', BIGINT(), ForeignKey('service_catalog.id')),
)
# Later on...
class ServiceCatalog(DeclarativeBase):
    __table__ = service_catalog

    # relation definitions
    organization = relationship('Organization', primaryjoin='ServiceCatalog.organization_id==Organization.id')
    activities = relationship('Activity', primaryjoin='ServiceCatalog.id==ActivityService.service_id', secondary=activity_service, secondaryjoin='ActivityService.activity_id==Activity.id')
    service_catalog = relationship('ServiceCatalog', primaryjoin='ServiceCatalog.parent_service_id==ServiceCatalog.id')
    organizations = relationship('Organization', primaryjoin='ServiceCatalog.id==ServiceCatalog.parent_service_id', secondary=service_catalog, secondaryjoin='ServiceCatalog.organization_id==Organization.id')
In ServiceCatalog.organizations, the secondary table is supposed to be the service_catalog Table, but that name was just rebound locally to the relationship defined one line above, so the relationship object gets passed as secondary instead. Swapping the order of the two definitions (so that organizations is defined while service_catalog still refers to the Table) fixes the issue.
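For illustration, here is a minimal sketch of the reordered class; the relationship definitions are copied from the generated code above, and renaming the self-referential relation (e.g. to parent_service, a name of my choosing) would avoid the shadowing entirely:
class ServiceCatalog(DeclarativeBase):
    __table__ = service_catalog

    organization = relationship('Organization', primaryjoin='ServiceCatalog.organization_id==Organization.id')
    activities = relationship('Activity', primaryjoin='ServiceCatalog.id==ActivityService.service_id', secondary=activity_service, secondaryjoin='ActivityService.activity_id==Activity.id')
    # Defined while the name service_catalog still refers to the Table object:
    organizations = relationship('Organization', primaryjoin='ServiceCatalog.id==ServiceCatalog.parent_service_id', secondary=service_catalog, secondaryjoin='ServiceCatalog.organization_id==Organization.id')
    # The name is only rebound afterwards, so nothing else is affected:
    service_catalog = relationship('ServiceCatalog', primaryjoin='ServiceCatalog.parent_service_id==ServiceCatalog.id')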

Related

Access related model fields from ModelAdmin actions for exporting to Excel

I am desperately waiting for someone's attention to get my question answered.... please help.
My ModelAdmin has an export-to-Excel action method.
I need to access related model fields in that action method. That means I cannot pass any arguments, so I tried relatedmodel_set, but the ModelAdmin action only shows the manager's memory location and fails when I try to access values through attributes:
<django.db.models.fields.related_descriptors.create_reverse_many_to_one_manager.<locals>.RelatedManager object at 0x7f8eea904ac0>
model.py
class EnrolStudent(models.Model):
    def get_trn_activity(self):
        return self.studenttraininactivities_set

class StudentTraininActivities(models.Model):
    trainin_activities = models.ForeignKey(EnrolStudent, on_delete=CASCADE, null=True)
    <other fields...>
admin.py
@admin.register(EnrolStudent)
class EnrolAdmin(admin.ModelAdmin):
    form = CityInlineForm
    inlines = [CohortTraininActivitiesInline]
    ...
    actions = [export_as_txt_action_0120("File NAT00120 data Export",
                                          fields=['information_no', 'get_trn_activity',
                                                  'student_enrol__student_code'])]
I need to access the related model fields to export to Excel.
I cannot pass a parameter to get_trn_activity, as you have noticed.
Getting the data for just the selected rows from the Django admin change_list page only needs a bit of work with the queryset in the actions method (kept in a separate actions.py file); that part I can do!
Please help me with this issue. I am new to Python / Django.
I also tried a property decorator on the related model, then accessed it from a method on the main model and called that inside the action, but I hit the same problem: I get the manager's memory address instead of the actual values, and I don't know how to get the data out of it.
If I can access the related fields then I can do it, no issue.
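For reference, the reverse accessor studenttraininactivities_set is a RelatedManager, which is why only its memory location is printed; it has to be evaluated (e.g. with .all() or .values_list()) before field values are available. A rough sketch of an action that does this, reusing names from the question (which exact fields to export is an assumption):
def export_as_txt_action_0120(description, fields=None):
    def action(modeladmin, request, queryset):
        rows = []
        for enrol in queryset:
            # Evaluate the reverse manager to get the related rows
            for activity in enrol.studenttraininactivities_set.all():
                rows.append((enrol.information_no, activity.pk))
        # ... write rows out to the export file here ...
    action.short_description = description
    return action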
Another question:
I had the same situation with a model and related model before, but they were connected through a OneToOneField relationship and I was able to use dunder (double-underscore) lookups to access the related model fields; in this case of a ForeignKey relationship I cannot see the related model in the queryset.
In the other case this is what I can do easily; here cohortdetails is the related model, and when I debugged I saw it listed in the queryset, which was great:
actions = [export_as_txt_action_0080("File NAT00080 txt Export",
                                     fields=['rto_student_code', 'first_name', 'family_name',
                                             'cohortdetails__highest_school__highestschool_levelcode',
                                             'cohortdetails__cohort_gender',
                                             'cohortdetails__student_dob'])]

Django migration would delete important table

SHORT VERSION: Django's migration subsystem seems to want to drop and re-create my table, rather than just adding a column. How can I fix that?
LONG VERSION:
I'd like to add a field to one of my Django 3.0 models. No biggie, right?
add field to the class definition
manage.py makemigrations
manage.py migrate
So there's a strange issue: when I run makemigrations, I see this in the output:
Migrations for 'api':
api/migrations/0006_auto_20200814_0953.py
- Delete model APISearch
[snip]
- Create model APISearch
[snip]
And sure enough, there are instructions in the resulting migrations to delete and then create an APISearch model.
That would be very bad...APISearch is a real table in my database, containing important data.
I think the issue has to do with the fact that a long time ago, APISearch was a proxy class (it has long since been changed to a concrete class). I can't figure out how Django is determining proxy-ness, so that I can correct it.
I think I found the solution.
Edit the migration that originally created the class:
operations = [
    migrations.CreateModel(
        name='APISearch',
        fields=[
        ],
        options={
            'proxy': False,  # explicitly set this to False (it was True)
            'indexes': [],
        },
        bases=('foo.search',),
    ),
]
Create an empty migration, and use the AddField operation to do what I want:
operations = [
    migrations.AddField(
        model_name='apisearch',
        name='my_new_field',
        field=models.NullBooleanField(blank=True, null=True),
    ),
]
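For completeness, a sketch of what the whole hand-written migration file could look like; the filename and the dependency entry are placeholders, not something Django generated here (an empty migration can also be created with manage.py makemigrations api --empty and then filled in):
# api/migrations/0007_add_my_new_field.py  (hypothetical name)
from django.db import migrations, models


class Migration(migrations.Migration):

    dependencies = [
        ('api', '0006_previous_migration'),  # replace with your latest applied migration
    ]

    operations = [
        migrations.AddField(
            model_name='apisearch',
            name='my_new_field',
            field=models.NullBooleanField(blank=True, null=True),
        ),
    ]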

How to get paths for relocatable schemas in Gio.Settings?

In Gio.Settings I can list relocatable schemas using
Gio.Settings.list_relocatable_schemas()
and I can use
Gio.Settings.new_with_path(schema_id, path)
to get a Gio.Settings instance. But how can I get all the values of path that are currently in use for a given schema_id?
Normally, a schema has a fixed path that determines where the settings are stored in the conceptual global tree of settings. However, schemas can also be ‘relocatable’, i.e. not equipped with a fixed path. This is useful e.g. when the schema describes an ‘account’, and you want to be able to store an arbitrary number of accounts.
Isn't new_with_path just for that? You have to store the paths you use somewhere, associated with your accounts, but that is not the responsibility of the Settings system. I think new_with_path is for the case where your schemas depend on accounts.
I think you can find more information with GSettingsSchemas - there is an example in the Description for a case where the schema is part of a plugin.
Unfortunately you cannot do it from Gio.Settings.
I see two options here:
Keep a separate gsettings key to store the paths of your relocatable schemas
Utilize the dconf API, which is a low-level configuration system. Since there is no Python binding (guessing this is a Python question) I suggest using ctypes for binding with C.
If you know the root path of your relocatable schemas, you can use the snippet below to list them.
import ctypes
from ctypes import Structure, POINTER, byref, c_char_p, c_int, util
from typing import List


class DconfClient:
    def __init__(self):
        self.__dconf_client = _DCONF_LIB.dconf_client_new()

    def list(self, directory: str) -> List[str]:
        length_c = c_int()
        directory_p = c_char_p(directory.encode())
        result_list_c = _DCONF_LIB.dconf_client_list(self.__dconf_client, directory_p, byref(length_c))
        result_list = self.__decode_list(result_list_c, length_c.value)
        return result_list

    def __decode_list(self, list_to_decode_c, length):
        new_list = []
        for i in range(length):
            # convert to str and remove slash at the end
            decoded_str = list_to_decode_c[i].decode().rstrip("/")
            new_list.append(decoded_str)
        return new_list


class _DConfClient(Structure):
    _fields_ = []


_DCONF_LIB = ctypes.CDLL(util.find_library("dconf"))
_DCONF_LIB.dconf_client_new.argtypes = []
_DCONF_LIB.dconf_client_new.restype = POINTER(_DConfClient)
_DCONF_LIB.dconf_client_list.argtypes = [POINTER(_DConfClient), c_char_p, POINTER(c_int)]
_DCONF_LIB.dconf_client_list.restype = POINTER(c_char_p)
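A short usage sketch, assuming the Gnome Terminal profiles root mentioned in the next answer (any root path you control works the same way):
client = DconfClient()
# Each entry returned is one subpath, e.g. one relocatable-schema instance stored under the root
for subpath in client.list("/org/gnome/terminal/legacy/profiles:/"):
    print(subpath)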
You can't, at least not for an arbitrary schema, and this is by definition of what a relocatable schema is: a schema that can have multiple instances, stored in multiple arbitrary paths.
Since a relocatable schema instance can be stored basically anywhere inside DConf, gsettings has no way to list their paths, it does not keep track of instances. And dconf can't help you either, as it has no notion of schemas at all, it only knows about paths and keys. It can list the subpaths of a given path, but that's about it.
It's up to the application, when creating multiple instances of a given relocatable schema, to store each instance in a sensible, easily discoverable path, such as a subpath of the (non-relocatable) application schema. Or to store the instance paths (or suffixes) as a list key in such a schema.
Or both, like Gnome Terminal does with its profiles:
org.gnome.Terminal.ProfilesList is a non-relocatable, regular schema, stored at DConf path /org/gnome/terminal/legacy/profiles:/
That schema has 2 keys: default, a string holding a single UUID, and list, a list of strings containing UUIDs.
Each profile is an instance of the relocatable schema org.gnome.Terminal.Legacy.Profile, and stored at, you guessed it... /org/gnome/terminal/legacy/profiles:/:<UUID>/!
This way a client can access all instances either from gsettings, by reading the list key and building the paths from the UUIDs, or from dconf, by directly listing the subpaths of /org/gnome/terminal/legacy/profiles:/.
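As an illustration of the gsettings route, a minimal PyGObject sketch; the schema IDs and the list key are the Gnome Terminal ones described above, and visible-name is one of the profile keys:
from gi.repository import Gio

profiles_list = Gio.Settings.new("org.gnome.Terminal.ProfilesList")
for uuid in profiles_list.get_strv("list"):
    path = "/org/gnome/terminal/legacy/profiles:/:%s/" % uuid
    profile = Gio.Settings.new_with_path("org.gnome.Terminal.Legacy.Profile", path)
    print(uuid, profile.get_string("visible-name"))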
And, of course, for non-relocatable schemas you can always get their paths with:
gsettings list-schemas --print-paths

Accessing cache.dat through ODBC

Ok, so I am trying to extract the information from a cache.dat database sent from another business. I am trying to get at the data using ODBC. I am able to see the globals from the samples namespace when trying to export to Access, but I can't get the data from this new database to show up.
I've tried to tackle this problem two ways. First, I simply shut down Cache, replaced the existing database in InterSystems\TryCache\mgr\samples and restarted Cache. Once I restarted I could see all the globals in the Management Portal from the new database. If I test the ODBC connection from the Windows ODBC administrator it connects. However, when I try to pull them into an Access database using ODBC there are no tables showing up to import.
I've also tried to add the database to my Cache but it gave me the error:
ERROR #5805: ID key not unique for extent 'Config.Databases'
I tried to fool around with the values in there but to no avail. This is my first time messing with anything like this and any, ANY help would be awesome.
If you access the Management Portal, do you see any table definitions defined for your namespace? If not, the application was written in Cache ObjectScript with no classes created to provide Object/SQL access. If this is the case then it could be a fair amount of work to create the classes that describe the data (global structures).
Matt,
Did the business that provided the CACHE.DAT file indicate that you should have ODBC access to the data?
Did they provide some document describing the data/globals? If they provided a document that describes the globals you could create the classes that map the data. Depending on what you want to do this could either be a resource intensive process or not.
If you want to directly access globals you can create a stored procedure that will do so. You should consider the security implications before you do this - it will expose all data in the global to anyone with ODBC access.
Here is an example of a stored procedure that returns the values of up to 9 global subscripts, plus the value at that node. You can modify it pretty easily if you need to.
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [SqlProc]
{
}

ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
    // Seed the traversal with a reference to the named global
    S qHandle="^"_GlobalName
    Quit $$$OK
}

ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    Quit $$$OK
}

ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    S Q=qHandle
    // Advance to the next node of the global via indirection
    S Q=$Q(@Q)
    I Q="" S Row="",AtEnd=1 Q $$$OK
    S Depth=$QL(Q)
    // First column is the node's value, then up to 9 subscripts
    S $LI(Row,1)=$G(@Q)
    F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
    F I=Depth+1:1:9 S $LI(Row,I+1)=""
    S AtEnd=0
    S qHandle=Q
    Quit $$$OK
}
I don't have code for you to get this from Access, but for reference, to access this from Python you might use (with pyodbc):
import pyodbc


class CacheOdbcClient:
    connectionString = "DSN=MYCACHEDSN"

    def getGlobalAsOverlyLargeList(self):
        connection = pyodbc.connect(self.connectionString)
        cursor = connection.cursor()
        # Call the OneGlobal query exposed as a stored procedure by the class above
        cursor.execute("call MyPackageName.MyClassName_OneGlobal ?", "MYGLOBAL")
        rows = []
        for row in cursor:
            rows.append((row.NodeValue, row.Sub1, row.Sub2, row.Sub3, row.Sub4,
                         row.Sub5, row.Sub6, row.Sub7, row.Sub8, row.Sub9))
        return rows
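A quick usage sketch; the DSN, package, class, and global names above are placeholders from the answer and would need to match your own setup:
client = CacheOdbcClient()
for node_value, *subscripts in client.getGlobalAsOverlyLargeList():
    # Print each node's value followed by its non-empty subscripts
    print(node_value, [s for s in subscripts if s])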

How do I use add_to in Class::DBI?

I'm trying to use Class::DBI with a simple one parent -> many children relationship:
Data::Company->table('Companies');
Data::Company->columns(All => qw/CompanyId Name Url/);
Data::Company->has_many(offers => 'Data::Offer'=>'CompanyId'); # =>'CompanyId'
and
Data::Offer->table('Offers');
Data::Offer->columns(All => qw/OfferId CompanyId MonthlyPrice/);
Data::Offer->has_a(company => 'Data::Company'=>'CompanyId');
I try to add a new record:
my $company = Data::Company->insert({ Name => 'Test', Url => 'http://url' });
my $offer = $company->add_to_offers({ MonthlyPrice => 100 });
But I get:
Can't locate object method "add_to_offers" via package "Data::Company"
I looked at the classical Music::CD example, but I cannot figure out what I am doing wrong.
I agree with Manni, if your package declarations are in the same file, then you need to have the class with the has_a() relationship defined first. Otherwise, if they are in different source files, then the documentation states:
Class::DBI should usually be able to do the right things, as long as all classes inherit Class::DBI before 'use'ing any other classes.
As to the three-argument form, you are doing it properly. The third arg for has_many() is the column in the foreign class which is a foreign key to this class. That is, Offer has a CompanyId which points to Company's CompanyId.
Thank you
Well, the issue was actually not my code, but my setup. I realized that this morning after powering on my computer:
* Apache + mod_perl on the server
* SMB mount
When I made changes to several files, not all of the changes seemed to be loaded by mod_perl. Restarting Apache solved the issue. I've actually seen this kind of issue in the past where the client and SMB server's time are out of sync.
The code above works fine with 1 file for each module.
Thank you
I really haven't got much experience with Class::DBI, but I'll give this a shot anyway:
The documentation states that: "the class with the has_a() must be defined earlier than the class with the has_many()".
I cannot find any reference to the way you are using has_a and has_many with three arguments, the third of which is always 'CompanyId' in your case.