Is there any limitation on the number of forms in Delphi applications?
I developed an application with 40 or more Forms (with Delphi XE4), and I'm concerned about its performance!
Is it a good idea to create Forms on demand instead of creating all of them at application startup?
No, there is no limitation on the number of Forms, other than available system memory. Forms (and their child components) are kept in TList descendants. Theoretically, a TList has an upper bound, but you will hit the limits of system memory, window handles, or GDI resources long before then, guaranteed.
Yes, it is preferable to create Forms on demand. Creating all Forms at application startup slows the startup unnecessarily and consumes memory needlessly, because most likely many Forms will never be used during an application session. Therefore you should always disable automatic form creation in the Form Designer page of the Environment Options. A related issue concerns the global form variables that the IDE adds to form units by default: delete them immediately. Instead, use your own reference-holding mechanism for the Forms you create.
On existing projects where that option wasn't disabled, you should remove all forms - besides the Main Form - from the auto-create forms list in the Forms page of the Project Options. This is equivalent to removing the corresponding Application.CreateForm(...) lines from the project file.
Of course, there can be exceptions to this guideline of creating Forms on demand. Some Forms may be used often enough (and may be expensive enough to create) to justify creating them once at startup and keeping them alive. Users are more accustomed to a somewhat slow application startup than to a slow action once the application is already running. In such cases, keeping the global Form variable can make sense to express the Form's application-long lifetime.
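This pattern is not Delphi-specific. Below is a minimal sketch of the same lazy-create/keep-alive idea, written in Java for illustration only; in Delphi the factory call would simply be the form's constructor (e.g. TMyForm.Create(Application)) at the point of use. The Form interface and the registry are hypothetical stand-ins, not VCL API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Supplier;

// Hypothetical stand-in for a form class; not a real VCL type.
interface Form { void show(); }

class FormRegistry {
    private final Map<String, Supplier<Form>> factories = new HashMap<>();
    private final Map<String, Form> keepAlive = new HashMap<>();

    void register(String name, Supplier<Form> factory) {
        factories.put(name, factory);
    }

    // Create the form only when it is first needed; cache only the
    // expensive, frequently used forms (the exception described above).
    void show(String name, boolean keep) {
        Form form = keep
                ? keepAlive.computeIfAbsent(name, n -> factories.get(n).get())
                : factories.get(name).get();
        form.show();
    }
}
```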
I have a project with 450 forms and 500 FastReports. I create forms on demand and release them on form close. Application startup takes 3 seconds.
I have 2 closed-source applications that must share the same data at some point. Both use REST APIs.
A concrete example is helpdesk tickets: they can be created in both applications, and I need to update the data in one application when the user adds or closes a ticket in the other, and vice versa.
Since they are closed-source, I can't really modify the code.
I was thinking I could create a third application that, every 5 minutes or so, lists both applications' tickets, compares them with the result of the previous call, and if the data differs, updates the other application accordingly.
Is there a better way of doing this?
With closed-source applications it's nearly impossible to get something out of them, unless they have some plugin-based setup that you can hook into.
The most efficient approach in terms of cost would be to have the first application publish a message to a queue, or call a webhook that you set up, whenever the event is triggered. But as I mentioned, the application needs to support that.
So yeah, your solution is pretty much all you can do for now, but keep in mind the challenges that you may encounter over time (a minimal sketch of the polling loop follows the list):
What if the results of both APIs are too large to be compared directly? Maybe you need to think about paging the results.
What if your app crashes and you lose the previous state? You need to back it up somehow in an external source.
How often should you poll the API to make sure you're getting the updates you need, while keeping good performance for the existing traffic?
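For illustration, here is a minimal Java sketch of the polling loop proposed in the question, with the caveats above in mind. Since both applications are closed-source, the endpoint URL, the diff step, and the push to the second application are placeholders, not real APIs.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Objects;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class TicketSyncPoller {
    private final HttpClient http = HttpClient.newHttpClient();
    // Previous poll result; persist this externally so a crash does not
    // lose the state (challenge 2 above).
    private String lastSnapshot;

    private String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    private void pollOnce() {
        try {
            // Hypothetical endpoint; the real API shape is unknown.
            String current = fetch("https://app-a.example.com/api/tickets");
            if (lastSnapshot != null && !Objects.equals(lastSnapshot, current)) {
                // Diff the two snapshots here and push the changes to app B.
            }
            lastSnapshot = current;
        } catch (Exception e) {
            e.printStackTrace(); // keep polling through transient failures
        }
    }

    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        scheduler.scheduleAtFixedRate(new TicketSyncPoller()::pollOnce, 0, 5, TimeUnit.MINUTES);
    }
}
```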
We are building an application on top of Siddhi (using the Java library) that allows users to dynamically add rules and have all incoming information going forward be run against those rules. My question is whether it's better to have one large app with many queries, streams, windows, and partitions, or to break each query out into its own application.
We have been including everything in one single Siddhi app (SiddhiAppRuntime), but this is starting to become large and I fear things may start interacting with each other in unintended ways. We are also snapshotting the SiddhiAppRuntime and restoring state whenever our application gets restarted. This could likely lead to massive restores if we have hundreds of pattern queries to re-run.
I am considering creating a separate SiddhiAppRuntime from a single SiddhiManager for each query. The benefits (as I see them) would be reducing the risk of unintentional interactions, letting each query function on its own, and making restores after a shutdown much simpler, since each runtime would only need to restore a single query. A potential downside is the increased overhead of having potentially hundreds of SiddhiAppRuntimes.
What is considered best practice for our scenario? What will offer better performance, both for running data through the rules and for restoring the rules in case we have to restart?
(If this is too broad or any clarification is needed I will do my best to update this question accordingly)
From the lengthy description you have given, I assume the rules that users add do not interact with each other, meaning rules added by user1 will not interact with rules added by user2.
In such a case, it is recommended to use a separate Siddhi app (SiddhiAppRuntime) for each user. This won't add much performance overhead, since the apps won't be interacting with each other, and it will improve the snapshotting process, since separate snapshots are taken per app.
This also gives you a clear separation between each collection of rules and makes them easy to manage.
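A minimal sketch of that layout, assuming Siddhi 5.x package names: each user's rules compile into their own SiddhiAppRuntime created from one shared SiddhiManager, and snapshot/restore happens per runtime. The stream and query definitions here are illustrative placeholders.

```java
import io.siddhi.core.SiddhiAppRuntime;
import io.siddhi.core.SiddhiManager;
import io.siddhi.core.stream.input.InputHandler;

public class PerUserRules {
    public static void main(String[] args) throws Exception {
        SiddhiManager manager = new SiddhiManager();

        // One app per user keeps that user's rules isolated from everyone else's.
        String user1Rules =
            "define stream TicketStream (userId string, priority int); " +
            "@info(name = 'highPriority') " +
            "from TicketStream[priority > 3] select userId, priority insert into AlertStream;";

        SiddhiAppRuntime runtime = manager.createSiddhiAppRuntime(user1Rules);
        runtime.start();

        // Events are fed to each runtime independently.
        InputHandler input = runtime.getInputHandler("TicketStream");
        input.send(new Object[]{"user1", 5});

        // Snapshot/restore is now per app, so a restart only replays one user's state.
        byte[] state = runtime.snapshot();
        runtime.restore(state);

        runtime.shutdown();
        manager.shutdown();
    }
}
```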
I'm designing an MVVM application that does not use WPF or Silverlight. It will simply present web pages in HTML5, styled with CSS3.
The domain is a perfect case for using WF because it involves a number of activities in a long-running process. Specifically, I am tracking the progress of interactions with a customer over a 30 day period and that involves filling out various forms at points along the way, getting approvals from a supervisor at certain times, and making certain that the designated order of activities is followed and is executed correctly.
Each activity will normally be represented by a form on a view designed to capture the desired information at that step. Stated differently, the view that a user sees will be determined by where she is in the workflow at that moment.
My research so far has turned up examples where the workflow is used to execute business logic in accordance with the flowchart that defines it.
In my situation, I need a user to be able to log in and then pick up where she left off in the workflow (for example, a new external event has occurred and she needs to fill out the form for it, or move forward in the workflow to that step).
And I need to support the case where the supervisor logs in and can basically be presented with activities that need approval at that time.
So... it seems to me that a WF solution might be appropriate, but maybe the way I want to use it is inverted - like the cart pulling the horse so to speak.
I'd appreciate any insight that anyone here can offer.
Thanks - Steve
I have designed an app similar to yours, based on WPF, where the screens shown by the application are driven by workflows.
I use a task-based approach. I have some custom activities that create user tasks in a DB. There are different types of tasks, one for every form type that the application supports. When the workflow reaches one of these special activities, the task is saved to the DB and the WF goes idle (bookmark).
Once the user submits the form, the WF is resumed and runs up to the point where the next user task is reached, and so on.
Tasks can be assigned to different users along the way (end user, supervisor, ...), and each user has a pending-tasks list from which they can resume previous WF instances, etc.
Then, to generate user views (HTML5 forms in your case) you have to read the pending task and translate that into the corresponding form.
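As a language-agnostic illustration of that task-based pattern (sketched in Java here, although the answer's implementation is WF/.NET), the pieces are roughly: a persisted task record, a store that the custom activity writes to before the workflow idles on its bookmark, and a lookup that drives each user's pending list. All names below are hypothetical.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.UUID;

// A user task persisted when the workflow reaches a "wait for form" activity.
record PendingTask(UUID workflowId, String formType, String assignee) {}

interface TaskStore {
    void save(PendingTask task);               // written by the custom activity; the WF then idles on a bookmark
    List<PendingTask> pendingFor(String user); // drives the user's pending-tasks list
    void complete(UUID workflowId);            // called on form submit, before resuming the workflow
}

// In-memory stand-in for the DB-backed store described above.
class InMemoryTaskStore implements TaskStore {
    private final Map<UUID, PendingTask> tasks = new HashMap<>();

    public void save(PendingTask t) { tasks.put(t.workflowId(), t); }

    public List<PendingTask> pendingFor(String user) {
        List<PendingTask> result = new ArrayList<>();
        for (PendingTask t : tasks.values()) {
            if (t.assignee().equals(user)) result.add(t);
        }
        return result;
    }

    public void complete(UUID id) { tasks.remove(id); }
}
```

The formType field is what you translate into the corresponding HTML5 form when generating the view.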
Hope you find it useful
This question was inspired by an earlier question I asked here; I learned from that question that DbContext instances should be short-lived dependencies. Now, given that I develop LOB desktop applications with local databases using SQL CE, I have a few questions:
1) In my case (local DB, single user, desktop app), should a DbContext really live for a short period of time?
2) If I disposed of my DbContext after every operation, would that make me lose all the change-tracking information gathered throughout its life cycle?
3) If the answer to 2 is yes (trouble!), how do I go about doing it the right way? Should I develop a UnitOfWork that keeps the change-tracking information, or what?
Old question, but maybe this can help someone.
As described in this article, the lifetime of the DbContext object depends on whether it is used in a web or a desktop app.
Web Application
It is now a common and best practice that, for web applications, the context is used per request. In web applications, we deal with requests that are very short but hold the whole server transaction; they are therefore the proper duration for the context to live in.
Desktop Applications
For desktop applications, like WinForms/WPF, etc., the context is used per form/dialog/page. Since we don't want to have the context as a singleton for our application, we dispose of it when we move from one form to another. This way, we gain a lot of the context's abilities without suffering the implications of long-running contexts.
Basically, the context should be a short-lived object, but always with the right balance.
1) Yes, short is good. But a new context for every single user input/interaction is extreme.
2) Clearly yes. But beyond a logical unit of work from a client interaction, the pattern of discarding the context fits in well. E.g., changing an order: perhaps the header, items, and customer are loaded; a new address is added to the customer, the order header is changed, and SaveChanges is called. A new logical interaction then starts on the client. Don't forget you can have several smaller contexts; indeed, bounded contexts are key to performance. Perhaps you have one long-running context with system config and other such settings that are non-volatile, few in number, but accessed very often. I would keep such a context around for longer.
3) Not sure exactly what the question is. But a LUW (logical unit of work) type of class that has a Commit method and then disposes the context is one such pattern.
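For illustration, here is that LUW pattern sketched in JPA terms, since Java's EntityManager is a rough analogue of EF's DbContext (the API differs, but the lifetime discipline is the same; the factory wiring is assumed):

```java
import jakarta.persistence.EntityManager;
import jakarta.persistence.EntityManagerFactory;

// One short-lived context per logical unit of work: begin on creation,
// commit explicitly, dispose on close.
public class UnitOfWork implements AutoCloseable {
    private final EntityManager em;

    public UnitOfWork(EntityManagerFactory factory) {
        this.em = factory.createEntityManager();
        em.getTransaction().begin();
    }

    // Change tracking lives only as long as this unit of work.
    public EntityManager context() {
        return em;
    }

    public void commit() {
        em.getTransaction().commit();
    }

    @Override
    public void close() {
        if (em.isOpen()) {
            em.close();
        }
    }
}
```

A caller opens one per form/dialog interaction, makes its changes through context(), calls commit(), and lets try-with-resources dispose the context.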
Don't forget to generate views on DbContexts if they are reloaded often.
I'm making a single page application, and one approach I am considering is keeping all of the templates as part of the single-page DOM tree (basically compiling server-side and sending in one page). I don't expect each tree to be very complicated.
Given this, what's the maximum number of nodes on the tree before a user on a mediocre computer/browser begins to see performance degradation? For example, 6 views stored as 6 hidden nodes each with 100 subnodes of little HTML bits.
Thanks for the input!
The short of it is, you're going to hit a bandwidth bottleneck before you'd ever hit a DOM size bottleneck.
Well, I don't have any mediocre machines lying around. The only way to find out something like that is to test it. It will be different for every browser, every CPU.
Is your application JavaScript?
If yes, you should consider loading only the templates you need using XHR, as you're going to be more concerned with load time on mobile than with performance on a crappy HP from 10 years ago.
What you describe should be technically reasonable for any machine of this decade, but you still should not load that much junk up front.
A single-page application doesn't necessitate bringing all the templates down at once. For example, your single page can have one or more content divs whose contents are replaced dynamically at will. If you're thinking about something like running JSON objects through a template to generate the HTML, the template can remain in the browser cache, the JSON itself stays in memory, and you can regenerate the HTML on demand, avoiding the DOM-size issue entirely.