How to get paths for relocatable schemas in Gio.Settings? - gtk

In Gio.Settings I can list relocatable schemas using
Gio.Settings.list_relocatable_schemas()
and I can use
Gio.Settings.new_with_path(schema_id, path)
to get a Gio.Settings instance. But how can I get all the values of path that are currently in use for a given schema_id?

Normally, a schema has a fixed path that determines where the
settings are stored in the conceptual global tree of settings.
However, schemas can also be ‘relocatable’, i.e. not equipped with a
fixed path. This is useful e.g. when the schema describes an
‘account’, and you want to be able to store an arbitrary number of
accounts.
Isn't new_with_path meant just for that? You have to store the paths somewhere, associated with your accounts, but that is not the responsibility of the settings system. I think new_with_path is for the case where your schemas depend on accounts.
I think you can find more information in the GSettingsSchema documentation - its Description section has an example for the case where the schema is part of a plugin.
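For what it's worth, the path handed to Gio.Settings.new_with_path must begin and end with a slash and contain no empty segments, or GSettings will refuse it. A minimal helper for building one path per account (the base path and account ids below are hypothetical):

```python
def account_path(base: str, account_id: str) -> str:
    """Build a GSettings path for one account instance under a base path.

    GSettings requires paths to begin and end with '/' with no '//' inside.
    """
    if not (base.startswith("/") and base.endswith("/")):
        raise ValueError("base must start and end with '/'")
    if not account_id or "/" in account_id:
        raise ValueError("account id must be a non-empty single path segment")
    return f"{base}{account_id}/"

# Usage (with PyGObject):
#   settings = Gio.Settings.new_with_path(
#       "org.example.App.Account",
#       account_path("/org/example/app/accounts/", "work"))
```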

Unfortunately you cannot do it from Gio.Settings.
I see two options here:
Keep separate gsetting to store paths of relocatable schemas
Utilize the dconf API, which is a low-level configuration system. Since there is no Python binding (I'm guessing this is a Python question), I suggest using ctypes to bind to the C library.
If you know the root path of your relocatable schemas, you can use the snippet below to list them.
import ctypes
from ctypes import POINTER, Structure, byref, c_char_p, c_int, util
from typing import List


class _DConfClient(Structure):
    _fields_ = []


_DCONF_LIB = ctypes.CDLL(util.find_library("dconf"))
_DCONF_LIB.dconf_client_new.argtypes = []
_DCONF_LIB.dconf_client_new.restype = POINTER(_DConfClient)
_DCONF_LIB.dconf_client_list.argtypes = [POINTER(_DConfClient), c_char_p, POINTER(c_int)]
_DCONF_LIB.dconf_client_list.restype = POINTER(c_char_p)


class DconfClient:
    def __init__(self):
        self.__dconf_client = _DCONF_LIB.dconf_client_new()

    def list(self, directory: str) -> List[str]:
        length_c = c_int()
        directory_p = c_char_p(directory.encode())
        result_list_c = _DCONF_LIB.dconf_client_list(
            self.__dconf_client, directory_p, byref(length_c)
        )
        return self.__decode_list(result_list_c, length_c.value)

    def __decode_list(self, list_to_decode_c, length):
        new_list = []
        for i in range(length):
            # convert bytes to str and strip the trailing slash
            decoded_str = list_to_decode_c[i].decode().rstrip("/")
            new_list.append(decoded_str)
        return new_list

You can't, at least not for an arbitrary schema, and this is by definition of what a relocatable schema is: a schema that can have multiple instances, stored in multiple arbitrary paths.
Since a relocatable schema instance can be stored basically anywhere inside DConf, gsettings has no way to list their paths, it does not keep track of instances. And dconf can't help you either, as it has no notion of schemas at all, it only knows about paths and keys. It can list the subpaths of a given path, but that's about it.
It's up to the application, when creating multiple instances of a given relocatable schema, to store each instance at a sensible, easily discoverable path, such as a subpath of the (non-relocatable) application schema. Or to store the instance paths (or suffixes) as a list key in such a schema.
Or both, like Gnome Terminal does with its profiles:
org.gnome.Terminal.ProfilesList is a non-relocatable, regular schema, stored at DConf path /org/gnome/terminal/legacy/profiles:/
That schema has two keys: default, a string holding a single UUID, and list, a list of strings containing UUIDs.
Each profile is an instance of the relocatable schema org.gnome.Terminal.Legacy.Profile, stored at, you guessed it, /org/gnome/terminal/legacy/profiles:/:<UUID>/
This way a client can access all instances using either gsettings, reading list and building the paths from the UUIDs, or from dconf, by directly listing the subpaths of /org/gnome/terminal/legacy/profiles:/.
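Following the Terminal layout described above, turning the UUIDs from the list key into per-profile settings paths is plain string work (the UUIDs here are made up):

```python
PROFILES_BASE = "/org/gnome/terminal/legacy/profiles:/"


def profile_paths(uuids):
    """Build the DConf path of each profile instance from the 'list' key."""
    return [f"{PROFILES_BASE}:{uuid}/" for uuid in uuids]


# Each resulting path can then be passed to
#   Gio.Settings.new_with_path("org.gnome.Terminal.Legacy.Profile", path)
paths = profile_paths(["0a936b42", "b1dcc9dd"])
# -> ['/org/gnome/terminal/legacy/profiles:/:0a936b42/',
#     '/org/gnome/terminal/legacy/profiles:/:b1dcc9dd/']
```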
And, of course, for non-relocatable schemas you can always get their paths with:
gsettings list-schemas --print-paths


Best practice for using variables to configure and create new Github repository instance in Terraform instead of updating-in-place

I am trying to set up a standard Github repository template for my organization that uses Terraform to spin up new repos with the configured settings.
Every time I try to update the configuration file to create a new instance of the repository with a new name, instead it will try to update-in-place any repo that was already created using that file.
My question is what is the best practice for making my configuration file reusable with input variables like repo name? Should I make a module or is there some way of reusing that file otherwise?
Thanks for the help.
Terraform is a desired-state-configuration system, which means that your configuration should represent the full set of objects that should exist rather than an instruction to create a single object.
Therefore the typical way to add a new repository is to add a new resource block declaring that new repository, and leave the existing ones unchanged. Terraform will then see that there's a new resource not currently tracked in the state and will propose to create it.
If your repositories are configured in some systematic way that you can describe using a mechanical rule rather than manual configuration then you can potentially use the for_each meta-argument to declare multiple resource instances from the same resource block, using Terraform language expressions to describe the systematic rule.
For example, you could create a local value with a higher-level data structure that describes what should be different between your repositories and then use that data structure with for_each on a single resource block:
locals {
  repositories = tomap({
    example_1 = {
      description = "First example repository"
    }
    example_2 = {
      description = "Second example repository"
    }
  })
}

resource "github_repository" "all" {
  for_each = local.repositories

  name        = each.key
  description = each.value.description
  private     = true
}
For simplicity in this example I've only made the name and description variable between the instances, but you can add whatever extra attributes you need for each of the elements of local.repositories and then access them via each.value inside the resource block.
The private argument above illustrates how this approach can avoid the need to re-state argument values that will be the same for each declared repository, and have your local.repositories data structure focus only on the minimum attributes needed to describe the variations you need for your local policies around GitHub repositories.
A resource block with for_each set appears as a map of objects when used in expressions elsewhere, using the same keys as in the map given in for_each. Therefore if you need to access the repository ids, or any other attribute of the systematically-declared objects, you can write Terraform expressions that work with maps. For example, if you want to output all of the repository ids as a map of strings:
output "repository_ids" {
  value = tomap({
    for k, r in github_repository.all : k => r.repo_id
  })
}

Python w/QT Creator form - Possible to grab multiple values?

I'm surprised to not find a previous question about this, but I did give an honest try before posting.
I've created a UI with Qt Creator which contains quite a few QtWidgets of type QLineEdit, QTextEdit, and QCheckBox. I've used pyuic5 to convert it to a .py file for use in a small Python app. I've successfully got the form connected and working, but this is my first time using Python with forms.
I'm searching to see if there is a built-in function or object that would allow me to pull the ObjectNames and Values of all widgets contained within the GUI form and store them in a dictionary with associated keys:values, because I need to send off the information for post-processing.
I guess something like this would work manually:
...
values = {}
values['checkboxName1'] = self.checkboxName1.isChecked()
values['checkboxName2'] = self.checkboxName2.isChecked()
values['checkboxName3'] = self.checkboxName3.isChecked()
values['checkboxName4'] = self.checkboxName4.isChecked()
values['lineEditName1'] = self.lineEditName1.text()
... and on and on
But is there a way to grab all the objects and loop through them, even if each different type (i.e. checkboxes, lineedits, etc) needs to be done separately?
I hope I've explained that clearly.
Thank you.
Finally got it working. Couldn't find a python specific example anywhere, so through trial and error this worked perfectly. I'm including the entire working code of a .py file that can generate a list of all QCheckBox objectNames on a properly referenced form.
I named my form main_form.ui from within Qt Creator. I then converted it into a .py file with pyuic5
pyuic5 main_form.ui -o main_form.py
This is the contents of a sandbox.py file:
import sys

from PyQt5 import QtCore, QtGui, QtWidgets

import main_form
# the name of my Qt Creator .ui form converted to main_form.py with pyuic5:
# pyuic5 original_form_name_in_creator.ui -o main_form.py


class MainApp(QtWidgets.QMainWindow, main_form.Ui_MainWindow):
    def __init__(self):
        super(self.__class__, self).__init__()
        self.setupUi(self)
        # Push button object on main_form named btn_test
        self.btn_test.clicked.connect(self.runTest)

    def runTest(self):
        # findChildren returns a list of all QCheckBox objects on the entire UI page
        c = self.findChildren(QtWidgets.QCheckBox)
        # This just shows how to access the objectName property as an example
        for box in c:
            print(box.objectName())


def main():
    app = QtWidgets.QApplication(sys.argv)  # A new instance of QApplication
    form = MainApp()                        # Set the form to be our MainApp
    form.show()                             # Show the form
    app.exec_()                             # and execute the app


if __name__ == '__main__':  # if we're running the file directly and not importing it
    main()                  # run the main function
See QObject::findChildren()
In C++ the template argument specifies which type of widget to retrieve. In PyQt you pass the type as the first argument instead, e.g. self.findChildren(QtWidgets.QLineEdit) to retrieve just the QLineEdit objects.
Alternatively, retrieve all widgets and switch handling on their type while iterating over the resulting list.
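The "switch handling while iterating" approach boils down to a per-type accessor table. Here is a Qt-free sketch of that dispatch pattern; with PyQt the keys would be e.g. QtWidgets.QCheckBox mapped to QCheckBox.isChecked and QtWidgets.QLineEdit mapped to QLineEdit.text (the stand-in classes below are hypothetical, kept Qt-free so the sketch runs anywhere):

```python
# Stand-ins for QCheckBox / QLineEdit, exposing the same small API surface.
class CheckBox:
    def __init__(self, name, checked):
        self._name, self._checked = name, checked

    def objectName(self):
        return self._name

    def isChecked(self):
        return self._checked


class LineEdit:
    def __init__(self, name, text):
        self._name, self._text = name, text

    def objectName(self):
        return self._name

    def text(self):
        return self._text


# One accessor per widget type; unknown types are simply skipped.
ACCESSORS = {CheckBox: CheckBox.isChecked, LineEdit: LineEdit.text}


def collect_values(widgets):
    """Map each widget's objectName to its value, dispatching on type."""
    values = {}
    for w in widgets:
        getter = ACCESSORS.get(type(w))
        if getter is not None:
            values[w.objectName()] = getter(w)
    return values
```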

Data models generated by Sqlautocode: 'RelationshipProperty' object has no attribute 'c'

Using PGModeler, we created a schema and then exported out some appropriate SQL code. The SQL commands were able to populate the appropriate tables and rows in our Postgres database.
From here, we wanted to create declarative Sqlalchemy models, and so went with Sqlautocode. We ran it at the terminal:
sqlautocode postgresql+psycopg2://username:password@host/db_name -o models.py -d
And it generated our tables and corresponding models as expected. So far, zero errors.
Then, when going to ipython, I imported everything from models.py and simply tried creating an instance of a class defined there. Suddenly, I get this error:
AttributeError: 'RelationshipProperty' object has no attribute 'c'
This one left me confused for a while. The other SO threads that discuss this had solutions nowhere near my issue (often related to a specific framework or syntax not being used by sqlautocode).
After finding the reason, I decided to document the issue at hand. See below.
Our problem was simply due to bad naming given to our variables when sqlautocode ran. Specifically, the bad naming happened with any model that had a foreign key to itself.
Here's an example:
#Note that all "relationship"s below are now "relation"
#it is labeled relationship here because I was playing around...
service_catalog = Table(u'service_catalog', metadata,
    Column(u'id', BIGINT(), nullable=False),
    Column(u'uuid', UUID(), primary_key=True, nullable=False),
    Column(u'organization_id', INTEGER(), ForeignKey('organization.id')),
    Column(u'type', TEXT()),
    Column(u'name', TEXT()),
    Column(u'parent_service_id', BIGINT(), ForeignKey('service_catalog.id')),
)

#Later on...
class ServiceCatalog(DeclarativeBase):
    __table__ = service_catalog

    #relation definitions
    organization = relationship('Organization', primaryjoin='ServiceCatalog.organization_id==Organization.id')
    activities = relationship('Activity', primaryjoin='ServiceCatalog.id==ActivityService.service_id', secondary=activity_service, secondaryjoin='ActivityService.activity_id==Activity.id')
    service_catalog = relationship('ServiceCatalog', primaryjoin='ServiceCatalog.parent_service_id==ServiceCatalog.id')
    organizations = relationship('Organization', primaryjoin='ServiceCatalog.id==ServiceCatalog.parent_service_id', secondary=service_catalog, secondaryjoin='ServiceCatalog.organization_id==Organization.id')
In ServiceCatalog.organizations, secondary is meant to be the service_catalog Table, but that name was just rebound, one line earlier, to the relationship of the same name. Swapping the order of the two definitions fixes the issue.
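The bug is ordinary Python name shadowing in a class body, independent of SQLAlchemy. A minimal sketch of the same mistake, with strings standing in for the Table and relationship objects:

```python
service_catalog = "the Table object"  # module level, like the Table above


class ServiceCatalog:
    # Rebinding the name inside the class body shadows the module-level Table...
    service_catalog = "the relationship"
    # ...so a later reference in the same class body resolves to the relationship,
    # not the Table that 'secondary=' actually needs.
    secondary = service_catalog


assert ServiceCatalog.secondary == "the relationship"
```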

Accessing cache.dat through ODBC

Ok, so I am trying to extract the information from a cache.dat database sent from another business. I am trying to get at the data using the ODBC. I am able to see the globals from the samples namespace when trying to export to Access, but I can't get the data from this new database to show up.
I've tried to tackle this problem two ways. First, I simply shut down Cache, replaced the
existing database in InterSystems\TryCache\mgr\samples and restart cache. Once I restart I can see all the globals in the Management Portal from the new database. If I test the ODBC connection from the Windows ODBC administrator it connects. However, when I try to pull them into an access database using ODBC there are no tables showing up to import.
I've also tried to add the database to my Cache but it gave me the error:
ERROR #5805: ID key not unique for extent 'Config.Databases'
I tried to fool around with the values in there but to no avail. This is my first time messing with anything like this and any, ANY help would be awesome.
If you access the Management Portal, do you see any table definitions for your namespace? If not, the application was written in Caché ObjectScript with no classes created to provide object/SQL access. If this is the case, it could be a fair amount of work to create the classes that describe the data (global structures).
Matt,
Did the business that provided the CACHE.DAT file indicate that you should have ODBC access to the data?
Did they provide some document describing the data/globals? If they provided a document that describes the globals you could create the classes that map the data. Depending on what you want to do this could either be a resource intensive process or not.
If you want to directly access globals you can create a stored procedure that will do so. You should consider the security implications before you do this - it will expose all data in the global to anyone with ODBC access.
Here is an example of a stored procedure that returns the values of up to 9 global subscripts, plus the value at that node. You can modify it pretty easily if you need to.
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [ SqlProc ]
{
}

ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
    S qHandle="^"_GlobalName
    Quit $$$OK
}

ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    Quit $$$OK
}

ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    S Q=qHandle
    S Q=$Q(@Q)
    I Q="" S Row="",AtEnd=1 Q $$$OK
    S Depth=$QL(Q)
    S $LI(Row,1)=$G(@Q)
    F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
    F I=Depth+1:1:9 S $LI(Row,I+1)=""
    S AtEnd=0
    S qHandle=Q
    Quit $$$OK
}
I don't have code for you to get this from Access, but for reference, to access it from Python you might use something like this (with pyodbc):
import pyodbc


class CacheOdbcClient:
    connectionString = "DSN=MYCACHEDSN"

    def getGlobalAsOverlyLargeList(self):
        connection = pyodbc.connect(self.connectionString)
        cursor = connection.cursor()
        cursor.execute("call MyPackageName.MyClassName_OneGlobal ?", "MYGLOBAL")
        rows = []
        for row in cursor:
            rows.append((row.NodeValue, row.Sub1, row.Sub2, row.Sub3, row.Sub4,
                         row.Sub5, row.Sub6, row.Sub7, row.Sub8, row.Sub9))
        return rows

Regarding the Eclipse EMF Command Framework

Can anyone tell me how to use AddCommand rather than SetCommand to do the following?
I have a class like this:
class Profile {
    List achievements;
    List grades;
    List extracurrics;
}
Now, suppose I need to add a grade object to this Profile object. How can I achieve this using AddCommand only?
SetCommand is used to set single values in an EMF model, while AddCommand is used to modify collection values, so in general it should not be a problem to use AddCommand here.
You can create a new AddCommand using the static creation function on AddCommand:
AddCommand.create(EditingDomain domain, EObject owner, EStructuralFeature feature, java.lang.Object value)
Explanation of given values:
domain: the editing domain your model lives in
owner: the element you are modifying
feature: the feature in the model, given to you by the EPackage of your model; in this case, the grades list feature
value: the new object to add to the list
There are many different create helpers on AddCommand, so if you need to insert at a specific index in the list, that is also doable.
I don't have EMF running here, so I cannot provide tested sources, but let me know if that doesn't do the trick.
It should look something like this:
Profile p = ...;
Grade g = ...;
Command add = AddCommand.create(domain,p, YourProfilePackage.Literals.PROFILE__GRADES, Collections.singleton(g));
where YourProfilePackage should be in the code generated automatically from your EMF model.