Is it possible to add a Ticket Report to the Available Reports via a plugin, so that after installing the plugin it becomes automatically available? Or would one have to manually save a custom query with the intended columns and filters?
If it is possible to do this via Python code, what is the Trac interface to be implemented?
You can insert the reports through an implementation of IEnvironmentSetupParticipant. Example usage is in env.py, where the reports are inserted in environment_created. In your plugin you'll want to insert the reports in upgrade_environment instead; example code for inserting a report can be found in report.py. needs_upgrade determines whether upgrade_environment is called on Environment startup, so you'll need to provide the logic that determines whether your reports are already present, returning False if they are and True if they are not. If True is returned, upgrade_environment will get called (in your case environment_created can just be pass, or it can directly call upgrade_environment, the latter being a minor optimization discussed in #8172). See the Database Schema for information on the report table.
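The presence check and insertion can be sketched roughly as follows. This is a minimal, self-contained sketch using sqlite3 directly rather than Trac's connection API; the table mirrors the relevant columns of Trac's report table, and the report title and query are made up for illustration:

```python
import sqlite3

REPORT_TITLE = "My Plugin Report"  # hypothetical report name
REPORT_QUERY = "SELECT id, summary, status FROM ticket WHERE status <> 'closed'"

def report_exists(conn, title):
    """The needs_upgrade-style check: is our report already present?"""
    cur = conn.execute("SELECT COUNT(*) FROM report WHERE title = ?", (title,))
    return cur.fetchone()[0] > 0

def insert_report(conn, title, query, description=""):
    """The upgrade_environment-style step: insert the report if missing."""
    if not report_exists(conn, title):
        conn.execute(
            "INSERT INTO report (author, title, query, description) "
            "VALUES (?, ?, ?, ?)",
            ("plugin", title, query, description),
        )
        conn.commit()

# Demo against an in-memory table shaped like Trac's report table.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE report (id INTEGER PRIMARY KEY, author TEXT, "
    "title TEXT, query TEXT, description TEXT)"
)
insert_report(conn, REPORT_TITLE, REPORT_QUERY)
print(report_exists(conn, REPORT_TITLE))
```

In a real plugin the same logic would live in the IEnvironmentSetupParticipant methods: the upgrade check returns `not report_exists(...)`, and the upgrade method calls the insert. Note the insert is idempotent, so calling it twice is harmless.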
In Trac 1.2 (not yet released) we've tried to make it easier to work with the database by adding methods to the DatabaseManager class. DatabaseManager in Trac 1.1.6 includes the methods set_database_version and get_database_version, which reduce the amount of code needed in IEnvironmentSetupParticipant.needs_upgrade when checking whether the database tables need to be upgraded (simpler still is to just call DatabaseManager.needs_upgrade). There's also an insert_into_tables method that you could use to insert reports.
That said, I'm not sure you need to put an entry for your plugin in the system table using set_database_version. You can probably just query the report table, check whether your report is present, and use that check to return a boolean from IEnvironmentSetupParticipant.needs_upgrade, which determines whether IEnvironmentSetupParticipant.upgrade_environment gets called. If you are developing for Trac 1.0.x, you can copy code from the DatabaseManager class in Trac 1.1.6. Another approach can be seen in CodeReviewerPlugin, for which I made a compat.py module that adds the methods I needed. The advantage of the compat.py approach is that the methods can be copied from the DatabaseManager class without polluting the main modules of your codebase. In the future, when your plugin drops support for Trac < 1.2, you can just delete the compat.py module and modify the imports in your plugin code, without having to change any of your primary plugin logic.
I would like to change the properties of multiple diagrams together rather than clicking on them one by one. Does anyone know how this can be achieved?
You can use the scripting facility of Enterprise Architect to loop over the diagrams you would like to change and update them.
See this section of the manual to get help.
There are a bunch of example scripts included with EA, either in the local scripts or in the EAScriptLib MDG.
Another source of examples is my Github repository: https://github.com/GeertBellekens/Enterprise-Architect-VBScript-Library
You could write SQL to manipulate your database directly. t_diagram.PDATA holds a long cryptic string in which one part is ScalePI=0; (the default, meaning no scaling). You can alter that to ScalePI=1; (meaning scale to one page).
String manipulation functions vary from database to database, so you need to write your own statement, which you can execute in a script using
Repository.Execute("UPDATE t_diagram ...")
Note that you should test this in a sandbox first, since invalid SQL can easily disrupt your whole repository.
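To illustrate the kind of statement involved, here is a runnable sketch against an in-memory SQLite table. This is illustration only: SQLite's replace() syntax is used here, EA repositories are typically Access or SQL Server whose string functions differ, and the PDATA content below is made up:

```python
import sqlite3

# Stand-in for EA's t_diagram table with a PDATA-style settings string.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t_diagram (Diagram_ID INTEGER PRIMARY KEY, PDATA TEXT)")
conn.execute(
    "INSERT INTO t_diagram (PDATA) VALUES "
    "('HideQuals=0;SuppressFOC=1;ScalePI=0;PPgs.cx=1;PPgs.cy=1;')"
)

# Flip ScalePI=0 (no scaling) to ScalePI=1 (scale to one page) for all diagrams.
conn.execute(
    "UPDATE t_diagram SET PDATA = replace(PDATA, 'ScalePI=0;', 'ScalePI=1;')"
)
conn.commit()
print(conn.execute("SELECT PDATA FROM t_diagram").fetchone()[0])
```

Inside an EA script, the equivalent statement (in your repository database's dialect) would be passed to Repository.Execute.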
I would like to ask if there is a relationship between the Workflow and Importing of Files.
For example, a workflow executes when a record is saved, and it applies to updated records. Its action is to update a certain field on the target module if a specific field is changed. Say, for example, field A is updated to YES if field B is changed.
So it works well when I manually save the record after updating field B.
How about during importing? Will the workflow still run, provided that all conditions are met?
I hope you could help me on this. I need to update our TS if there's a need for hard coding to support this.
Actually I have already posted this on the sugar forums. :D
Thanks so much!
Workflows are triggered by a before_save logic hook.
Logic hooks are triggered any time the save() function is called.
Creating records via the import process calls the save() function.
So, yes, importing records will trigger your workflows.
The short answer is yes. There is no long answer.
Sugar uses the SugarBean class to save records, and SugarBean handles workflow. So whether you save through the Sugar UI, import records, or use web services, they all go through SugarBean and therefore trigger workflow.
I am trying to move SugarCRM data (Leads, Opportunities, and Applicants - these are modules in SugarCRM). I have the .csv files but no SQL; it's hosted by this company and they won't give me the SQL.
The issue is that Leads, for example, has 212 columns (fields), while stock SugarCRM has far fewer.
I am trying to figure out the best way to import all the data without having to use Studio to create each field individually.
The Opportunities module has 110 fields on the hosted version, while stock SugarCRM only has about 27.
So my question is: how do I create all the fields so I can import the data?
I already created a file that gets the column names, and I did import all the data into a table called leads1. When I rename it to leads and check, the data doesn't show on the page.
Any ideas? (Please don't answer "ask the company to send you the SQL" - they will not send it; they know I want to move out of their hosted environment, and I already spent an hour on the phone with them.)
Any ideas or suggestions would be greatly appreciated. Thank you.
With or without the SQL you'll need to recreate the fields in Studio, as the views also need to include the fields. It's tedious, but the only real viable option in this case. It is important that the fields are named exactly the same when doing this, so that the import works correctly.
If you can hack some code, another option is to create a module that exports the SQL for the whole database from within SugarCRM, along with the whole file structure as a zip, so that you don't have to recreate anything.
BTW - make sure that the SugarCRM instance you are moving to is the exact same version. Once you have done the import you can upgrade to your desired version. This guarantees that the DB structure will be the same (provided the custom fields are created appropriately).
Good luck!
I would like to make a REST call to a report and provide the datasource as a parameter at runtime such as this:
http://somereporthost.com:8080/jasperserver/rest_v2/reports/reports/Recently_Created?datasource=ds_test&user=doej&begin_date=2012-12-04
Given this example, in the use case I have in mind, ds_test would already exist as would others (ds_test2, ds_test3) so that any datasource could be specified at runtime.
Is it possible to specify a datasource at runtime?
I have seen one thread that involves changing the datasource associated with a particular report, but unless I misunderstood the solution, I see potential race-condition issues.
I saw another that creates a copy of the report on the fly with the desired datasource, but I think this would create the need for some housekeeping when reports are updated, and it seems like overkill.
I'm currently using release 6.2 of JasperReports Server. I think the way this is supposed to work is by referencing user attributes and defining as many execution users as you have datasources.
Please take a look at this answer:
https://stackoverflow.com/a/37926230/5731158
We are using NUnit to run our integration tests. One of the tests should always do the same thing, but take different input parameters. Unfortunately, we cannot use the [TestCase] attribute, because our test cases are stored in external storage. We have dynamic test cases that can be added, removed, or disabled (not removed) by our QA engineers. The QA people cannot add [TestCase] attributes to our C# code; all they can do is add cases to the storage.
My goal is to read the test cases from the storage into memory, run the test with all enabled test cases, and report if a test case fails. I cannot use a foreach statement, because if test case #1 fails, the rest of the test cases will not run at all. We already have a build server (CruiseControl.NET) where generated NUnit reports are shown, so I would like to keep using NUnit.
Could you point me to a way to achieve this?
Thank you.
You can use [TestCaseSource("PropertyName")], which specifies a property (or method, etc.) to load test data from.
For example, I have a test case in Noda Time which uses all the BCL time zones - and that could change over time, of course (and is different on Mono), without me changing the code at all.
Just make your property/member load the test data into a collection, and you're away.
(I happen to have always used properties, but it sounds like it should work fine with methods too.)
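A minimal sketch of that pattern (all names here are made up; in a real setup the source member would read your enabled test cases from the external storage instead of returning a hard-coded list):

```csharp
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StorageDrivenTests
{
    // In practice, load the enabled test cases from your external storage here.
    public static IEnumerable<TestCaseData> GetTestCases()
    {
        yield return new TestCaseData("input-1");
        yield return new TestCaseData("input-2");
    }

    [TestCaseSource("GetTestCases")]
    public void IntegrationTest(string input)
    {
        // Each TestCaseData becomes an independent test in the NUnit report,
        // so one failing case does not stop the others from running.
        Assert.That(input, Is.Not.Empty);  // replace with your real check
    }
}
```

Because each item from the source becomes its own test, a failure in case #1 is reported individually and the remaining cases still execute, which solves the foreach problem.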