Main difference between metadata and vardefs - SugarCRM

It seems simple, but I'm not sure I understand the difference. Both metadata files and vardefs contain field definitions, and both ultimately point to SugarFields.
So what is the difference between the two when it comes to field definitions?

The difference between metadata and vardefs is:
Metadata is a simple table structure definition, so SugarCRM knows which fields/tables to add to the database on Repair & Rebuild.
Vardefs are field definitions inside modules, also resulting in database fields in the module's table; but vardefs cannot exist without a module, whereas metadata can exist without one.
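To make the difference concrete, here is a minimal sketch (the module, field, table and file names are invented for illustration, not taken from a real project): a vardef entry adds a field to a module's table, while a metadata file defines a standalone table that Repair & Rebuild creates:

<?php
// Hypothetical vardef extension, e.g. custom/Extension/modules/Accounts/Ext/Vardefs/phone_backup.php
// Adds a field to an existing module; the column ends up in the module's own table.
$dictionary['Account']['fields']['phone_backup_c'] = array(
    'name'  => 'phone_backup_c',
    'vname' => 'LBL_PHONE_BACKUP',
    'type'  => 'varchar',
    'len'   => 30,
);

// Hypothetical metadata file, e.g. custom/metadata/account_note_backupMetaData.php
// Defines a table with no module behind it; Repair & Rebuild creates it from this definition.
$dictionary['account_note_backup'] = array(
    'table' => 'account_note_backup',
    'fields' => array(
        array('name' => 'id',         'type' => 'id'),
        array('name' => 'account_id', 'type' => 'id'),
        array('name' => 'note',       'type' => 'varchar', 'len' => 255),
    ),
    'indices' => array(
        array('name' => 'account_note_backup_pk', 'type' => 'primary', 'fields' => array('id')),
    ),
);

The first definition cannot exist without the Accounts module; the second has no module at all, which is exactly the distinction described above.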

What are possible ways to include external tables in TYPO3?

Since TYPO3 uses Doctrine, it is possible to use tables from multiple databases in one instance (with some restrictions, like no joins).
But what is possible at all?
At the moment I need two external tables for an extension, and instead of using them directly I import them so I can work locally as usual. But the importing has some drawbacks.
Drawbacks I can accept:
the data is not live (changes to the external tables are imported later)
the data is read only (changes are done externally anyway)
For importing I use ext:external_import, but there are some problems: not all data can be imported in a single run, and then there are errors (e.g. reports about duplicate keys, although there are no duplicate keys in the external tables).
On the other hand, I doubt I can use the external tables directly, as they do not have the usual TYPO3 structure (fields: 'uid', 'pid', 'tstamp', ...). (Maybe they can be mapped in a view?) (Of course, these fields exist in the tables I import the data into.)
Also, external changes may go unnoticed, so cached content does not reflect the current data. In my case that would be a minor problem, as we currently have no 'live' data anyway, but the cache and the search index (Solr) need to be cleaned regularly.
What are possible solutions? (Do they depend on the TYPO3 version?)
What are your experiences?
EDIT:
While trying to implement this based on the given answers, more doubts appeared:
The tables are read-only (as they are changed from outside):
How do I declare that to TYPO3?
The tables do not follow the usual naming rules; in particular, one table is named sys_category, which conflicts with the TYPO3 table sys_category.
Can I build a mapping inside of TYPO3?
Can I build a view from TYPO3 for renaming tables and fields?
like:
CREATE VIEW tx_myext_category AS
SELECT id AS uid, name AS title, ...
FROM databasename.sys_category;
Yes, you can fetch data directly from other databases/tables. Of course it highly depends on the use cases and the data you get:
Reading and writing data works fine using the QueryBuilder and all the APIs you know from https://docs.typo3.org/m/typo3/reference-coreapi/master/en-us/ApiOverview/Database/Index.html, like ConnectionPool and QueryBuilder.
If you want to show the data in the FormEngine, e.g. in the list module, you will also need the minimum columns like uid and pid, as well as a valid TCA.
From my experience, the mapping mechanism only works if the external table has a structure almost identical to that of TYPO3 tables. You need at least a uid field on the external side; this cannot be mapped! A missing pid field can be compensated for on the TYPO3 side, as can crdate or tstamp if needed. Just fill the local data array with the values TYPO3 needs.
Problems arise if you have relations to deal with: many external systems handle relations differently, and you can run into trouble if you rely only on the mapping mechanism.
Other problems are date fields: most external tables in the Microsoft world use a format other than Unix time.
If you run into problems with the mapping mechanism you can switch to the TYPO3 QueryBuilder. This is a powerful fallback; I experienced problems only with a special type of JOIN statement.
But with the TYPO3 QueryBuilder you are on your own. You place the QueryBuilder code in your repository and add your model code as usual; thus you can continue to work with Fluid in the frontend as you are used to.
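As a rough illustration of that approach (for TYPO3 8/9; the connection name 'external', the table 'ext_category' and the field names are assumptions, not part of the question): a second Doctrine connection is registered, the external table is mapped to it, and a repository method reads it through the ConnectionPool:

<?php
// typo3conf/AdditionalConfiguration.php -- register a second connection
// and map the external table (here called 'ext_category') to it.
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['external'] = [
    'charset'  => 'utf8',
    'driver'   => 'mysqli',
    'host'     => 'db.example.org',
    'dbname'   => 'external_db',
    'user'     => 'reader',
    'password' => 'secret',
];
$GLOBALS['TYPO3_CONF_VARS']['DB']['TableMapping']['ext_category'] = 'external';

// Hypothetical repository, e.g. Classes/Domain/Repository/CategoryRepository.php:
use TYPO3\CMS\Core\Database\ConnectionPool;
use TYPO3\CMS\Core\Utility\GeneralUtility;

class CategoryRepository
{
    public function findAllRaw(): array
    {
        $queryBuilder = GeneralUtility::makeInstance(ConnectionPool::class)
            ->getQueryBuilderForTable('ext_category');

        return $queryBuilder
            ->select('id', 'name')
            ->from('ext_category')
            ->orderBy('name')
            ->execute()
            ->fetchAll();
    }
}

Because of the TableMapping entry, getQueryBuilderForTable('ext_category') transparently returns a builder bound to the external connection, so the repository code looks like any other QueryBuilder code.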
ANSWER TO EDIT:
With the TYPO3 QueryBuilder, read-only tables aren't a problem. Just don't implement the setter methods in your models.
With the TYPO3 QueryBuilder you can query any external table with any name. You have full control over the output data in your repository because the mapping is handled inside it.
As far as I know, there is no way to create SQL views in TYPO3 up to v9, neither with the DBAL mapping mechanism nor with the TYPO3 QueryBuilder.
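A hypothetical sketch of those last points (names invented; it assumes a second connection named 'external' is configured as above): the external table keeps its own name, the repository does the renaming, and the model offers no setters, so the data stays read-only from the TYPO3 side:

<?php
use TYPO3\CMS\Core\Database\ConnectionPool;
use TYPO3\CMS\Core\Utility\GeneralUtility;

// Plain read-only value object: no setters, so nothing can be written back.
class ExternalCategory
{
    protected $uid;
    protected $title;

    public function __construct(int $uid, string $title)
    {
        $this->uid = $uid;
        $this->title = $title;
    }

    public function getUid(): int { return $this->uid; }
    public function getTitle(): string { return $this->title; }
}

class ExternalCategoryRepository
{
    public function findAll(): array
    {
        // The external table may even be called 'sys_category' in its own
        // database; only this repository knows about that name.
        $rows = GeneralUtility::makeInstance(ConnectionPool::class)
            ->getConnectionByName('external')
            ->select(['id', 'name'], 'sys_category')
            ->fetchAll();

        // Rename the external columns to the names the rest of the code expects.
        return array_map(function (array $row) {
            return new ExternalCategory((int)$row['id'], (string)$row['name']);
        }, $rows);
    }
}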

Changing field names in edmx file in a smart way

I've just noticed how terribly tough Entity Framework makes the simple task of changing the name of a field in a table in the model. There are the following difficulties. Renaming (or changing the type of) a field using the GUI:
doesn't change the mapping
doesn't rename the members in the generated models, but regenerates them
So after renaming a field, we need to manually update the edmx's XML and deal with all references to the old name of the generated POCO throughout the project.
Is there any way to do it smarter? Are there any tools for it?
(I'm using EF 4)

How to achieve N:M Relations with Attributes in Extbase?

I was just trying to achieve an N:M relation between two of my domain models with an attribute.
I tried this tutorial (sorry, it's German, but the code is fine) and everything works fine in the backend and the database (data records are created correctly, relations are visible in the backend/TCA config).
BUT: When I create a model/repository/controller/plugin (all with minimal basic configuration, just for testing the output, so nothing fancy here) and try to output my "firma" with the repository method findAll(), I get an SQL syntax error.
Extbase seems to access the wrong tables. Normally the SQL statement should ask for the mitarbeiterid/firmaid in the relation table, but the SQL error reveals that Extbase tries to find the columns "firmaid"/"mitarbeiterid" in the "mitarbeiter" table, where those columns do not exist.
Does anyone know if this can be fixed, or am I missing something from the tutorial (I'm aware that the first tutorial has some typos, but that's not the problem)? I tried another tutorial, an IRRE tutorial, which is basically the same, just a bit more extensive; same SQL error there. What has to be done to get these data records output in the frontend?
Thanks in advance.
The tutorial seems to be outdated; there is a more up-to-date one that actually uses Extbase/Fluid.
But lucky you, it's not that hard to achieve what you are aiming for. Check the following list. Make sure that:
Your class names, table names and folder structure are in sync with Extbase's expectations
You have two domain models, each of which has a property containing an ObjectStorage that holds instances of the other model
You have configured the TCA for both tables to use an MM table for the property containing the ObjectStorage (see the sketch below)
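Here is a minimal sketch of the last two points, with invented extension and table names (tx_myext_domain_model_firma, tx_myext_domain_model_mitarbeiter, and the MM table tx_myext_firma_mitarbeiter_mm); it shows only one direction of the relation:

<?php
// Classes/Domain/Model/Firma.php -- the property holding the related records.
namespace Vendor\Myext\Domain\Model;

use TYPO3\CMS\Extbase\DomainObject\AbstractEntity;
use TYPO3\CMS\Extbase\Persistence\ObjectStorage;

class Firma extends AbstractEntity
{
    /**
     * @var \TYPO3\CMS\Extbase\Persistence\ObjectStorage<\Vendor\Myext\Domain\Model\Mitarbeiter>
     */
    protected $mitarbeiter;

    public function __construct()
    {
        $this->mitarbeiter = new ObjectStorage();
    }

    public function getMitarbeiter(): ObjectStorage
    {
        return $this->mitarbeiter;
    }
}

// TCA column on tx_myext_domain_model_firma: the 'MM' entry is what makes
// Extbase query the relation table instead of looking for a column like
// 'firmaid'/'mitarbeiterid' on the record tables themselves.
$GLOBALS['TCA']['tx_myext_domain_model_firma']['columns']['mitarbeiter'] = [
    'label' => 'Mitarbeiter',
    'config' => [
        'type' => 'select',
        'renderType' => 'selectMultipleSideBySide',
        'foreign_table' => 'tx_myext_domain_model_mitarbeiter',
        'MM' => 'tx_myext_firma_mitarbeiter_mm',
    ],
];

For the opposite direction you would add a matching ObjectStorage property to the Mitarbeiter model and a TCA column that uses the same MM table together with MM_opposite_field.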

Managing changes in class structure to be consistent with MongoDB collection

We are using MongoDB with C#. We are trying to figure out a way to keep our collections consistent seamlessly. Right now, if a developer makes any change to the class structure (adds a field, changes a data type, or changes a property within a nested class), he or she has to change the Mongo collection manually.
It's a pain, as our project is growing and the number of developers working on it keeps increasing. I was wondering whether someone has already figured out a way to manage this issue.
Research
I found a similar question; however, I couldn't find a solution there.
I found a way to enumerate all properties (Finding the properties); however, data types and nested documents become an issue.
If you want to migrate gradually as records are accessed you need to follow a few simple rules:
1) If you add a field it had better be nullable or have a default value specified.
2) Never rename fields, never change field types
- Instead, always add new fields, add migration code, and remove the old fields only when all documents have been migrated over.
For prototyping with MongoDB and C# I built a dynamic wrapper ... that lets you specify your objects using only interfaces (no classes needed), and it lets you dynamically add new interfaces to an existing object. It is not ready for production use, but for prototyping it saves a lot of effort and makes migration really easy.

Is it possible to have multiple Entity Framework edmx's with a shared connection string?

My concept is to have a logging/audit edmx file with corresponding mapped types defined in one project. This edmx has concepts and classes like AuditTrail and PropertyChange.
A second edmx for the actual application models, domain if you will, with classes like Product, Category, and Order.
What I want to do is "scoop up" the first auditing edmx file into the second domain edmx. The schema information is 100% the same, the database has tables from both.
What I want to have happen is that these two edmx files are combined in such a way that I can use a transaction to save data, so that both depend on each other finishing. My audit information can't save without my domain information, and vice versa.
I've been googling around and this seems possible; I'm just missing some implementation detail that would bring it together.
This should be as simple as pointing the domain edmx and ObjectContext connection string to the auditing edmx's CSDL, SSDL, and MSL files? The goal would be to load all of the metadata inside a single instance of an ObjectContext so I can wrap a call to both with a transaction.
This is what I have in my connection string for the web app/domain part of this application:
connectionString="metadata=res://*/Models.CfarModels.csdl|
res://*/Models.CfarModels.ssdl|
res://*/Models.CfarModels.msl|
W:\map\AuditModels.csdl|
W:\map\AuditModels.ssdl|
W:\map\AuditModels.msl;
Am I on the right track here, or is this impossible?
While it is possible to load multiple CSDL files into one (Entity)Connection, it is not possible to load multiple MSL or SSDL files, because these are completely self-contained.
Back to the CSDL: there is a rarely used <using> element in CSDL, which might give the impression that it is similar to a Reference in .NET, but actually it is more like a merge.
I.e. using one CSDL in another actually modifies both, merging them together, and potentially invalidates MSLs and SSDLs by making them incomplete.
So back to your scenario:
You could in theory have 2 CSDLs:
1) To describe the Audit types
2) One that uses (1), defines the Domain Types, and defines EntitySets and AssociationSets for both kinds of types.
You would then have one mapping file to map (2) and a storage model file too.
Which would leave you with something like this:
"metadata=res://*/Models.CfarModels.csdl|
res://*/Models.CfarModels.ssdl|
res://*/Models.CfarModels.msl|
W:\map\AuditModel.csdl;
Personally, though, I don't think you gain enough from this separation to make it worthwhile: most of the metadata is in the extents, the mapping and the storage model anyway, so type re-use, while nice, is only about 20% of the work.
All you can really re-use is the audit type definitions, but that probably isn't worth the effort.
Hope this helps
Alex James
Microsoft.