In Simulink, when you clone a subsystem, you eventually need to update both copies. The problem is that when you update one subsystem, its clones are not updated as well. Is there any way to synchronize the clones so that when you change any one of the subsystems, the original and all clones pick up the change?
In other words, I need several subsystems with only one identity.
This is what Libraries are for.
Yes, that's what libraries are for. You need to create your own library; any change you then make to the block in the library is propagated to every model using that block, provided that the library links are not disabled or broken. For more details, see Create Block Libraries in the Simulink documentation.
Suppose I have a source code directory. I want to run a script that scans the code in the directory and returns the languages, frameworks, and libraries used in it. I've tried github/linguist; it's a great tool, which even GitHub uses to detect the programming languages in a repository, but I am not able to go beyond that and detect the exact framework.
I even tried tools like it-depends to fetch the dependencies, but its output is a mess.
Could someone help me figure out how to do this, either with an existing tool or, if I have to build such a tool myself, how I should approach it?
Thanks in advance.
This is, in the general case, impossible. The halting problem precludes any program from computing, in finite time, what other programs may or may not do -- including which dependencies they require to run. Sure, you can make it work for some inputs, but never for all.
So you have to compromise:
Which languages do you need to support? it-depends does not try to support Java, for example. Different languages have different ways of declaring dependencies in their source code. For example, if working with C, you will want to look at #include directives (a minimal sketch of that kind of extraction follows this list).
Which build chains do you need to support? Parsing a standard Makefile for C is very different from, say, looking into a Maven pom.xml for Java. Additionally, build chains can perform arbitrary computation -- and again, due to the halting problem, your dependency-detection program will not be able to statically figure out the intended behavior. It is entirely possible to link against one library or another (or none at all) depending on what is detected to exist; what should you output in that case? For programs that have no documented build process, you simply cannot know their dependencies. Often, the build process is documented for humans but is not machine-readable.
What do you consider a library/framework? Long-lived libraries can evolve through many versions, and the fact that one version is required and not another may not be explicit in the source code. If a code base depends on behavior found only in a specific, now superseded, version of a library, and no explicit mention of that version is present, your dependency-detection program will have no way to know about it (unless you code in library-version-specific detection, which is doable, but only on a case-by-case basis, and requires deep knowledge of the differences between versions).
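To make the first point concrete, here is a deliberately naive sketch (mine, not from any existing tool) of per-language extraction for the C/#include case. It treats every #include line as a dependency and ignores comments, macros, and #if blocks, which is exactly where the caveats above start to bite:

```c
/* Minimal sketch: list the #include dependencies of one C source file.
   Deliberately naive -- it ignores comments, macros, and conditional
   compilation, which a real tool would have to handle. */
#include <stdio.h>
#include <string.h>
#include <ctype.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file.c\n", argv[0]);
        return 1;
    }
    FILE *f = fopen(argv[1], "r");
    if (!f) {
        perror(argv[1]);
        return 1;
    }
    char line[1024];
    while (fgets(line, sizeof line, f)) {
        const char *p = line;
        while (isspace((unsigned char)*p)) p++;   /* skip indentation      */
        if (*p++ != '#') continue;
        while (isspace((unsigned char)*p)) p++;   /* "#  include" is legal */
        if (strncmp(p, "include", 7) == 0)
            fputs(line, stdout);                  /* report the dependency */
    }
    fclose(f);
    return 0;
}
```

Every language and build chain you want to support needs its own extractor of at least this kind, and usually a much smarter one.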
Therefore, the answer to your question is that... it depends (the it-depends documentation goes into a fair amount of detail about these limitations). For the specific case of Java + Maven, which it-depends does not cover, you can use Maven itself via mvn dependency:tree. Choose a subset of the problem instead of trying to solve it all at once.
I want to know whether Embedded Coder can be used to generate code for a single subsystem only, rather than for the whole model. The model I am working on contains multiple pieces of complex logic running on a memory-constrained real-time system.
I want to generate the code for one subsystem and analyze it before proceeding to the others.
Because these subsystems are interdependent, reducing complexity at a later stage of development would be a difficult task.
Yes, I believe so. If you right-click on a subsystem, you should see an option named "Generate Code" or similar.
I have a complex NAnt build script consisting of many *.build and *.include files with many targets inside, which in turn are invoked both via depends and via call. I'd like a visual representation, in tree-like form, of what calls what. It should also be easy to regenerate, because the script keeps growing.
Is there any ready-made tool or some API (preferably .NET-based) I can use for this purpose?
There's NAntBuilder, although it seems to be expensive (with a free trial). I've never used it personally, so I can't recommend it either way.
I've not found one, but my general mantra is "get your designer out of my face". Imagine a database diagram in SQL Server Management Studio, or on the EF design surface, with 30 or 50 tables. Generally my mental map is better organized.
Probably the best way to initially visualize the dependencies is to run the build and watch the task names appear in the output.
I have an iPad application that is customized for different customers with regard to color scheme, logos, and other items.
I have already created different targets, in which I define #ifdef macros accordingly; most of the variables are defined in a global.h file for easy maintenance.
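For illustration, a global.h along those lines might look like the following sketch. The CUSTOMER_* symbols and setting names are invented; each target would define exactly one CUSTOMER_* macro in its build settings:

```c
/* global.h -- hypothetical sketch; the customer names and settings
   are invented. Each target defines exactly one CUSTOMER_* macro. */
#ifndef GLOBAL_H
#define GLOBAL_H

#if defined(CUSTOMER_A)
    #define BRAND_NAME        "Customer A"
    #define BRAND_LOGO_FILE   "logo_a.png"
    #define FEATURE_REPORTING 1
#elif defined(CUSTOMER_B)
    #define BRAND_NAME        "Customer B"
    #define BRAND_LOGO_FILE   "logo_b.png"
    #define FEATURE_REPORTING 0   /* this client doesn't get the feature */
#else
    #error "No customer target defined"
#endif

#endif /* GLOBAL_H */
```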
Do you have any other useful suggestions I should consider at this point, especially since there will be updates in the future, but not all new features will be available to all clients?
Firstly, if you don't do so already, use the branching feature of your version control system to handle different client needs; i.e., if one client wants an additional feature, don't (automatically) contaminate your master code base.
You can also encapsulate all configurable features of your app. A very simple approach would be to create a configuration .plist or another kind of XML file in which you can easily configure the adaptable features.
Mainly, if possible, try to extract all customizable features out of the primary code base and load customizable data from easily editable files such as XML, to make sure you don't accidentally break something while configuring the app for a client.
You could abstract out all of the customizable elements entirely and turn the remaining code into a library. Each customer's app would then just instantiate that library and feed in its own customized elements. That way you don't have to change any code when you build the app for a new customer. You also get the benefit of having all the differences between customers located in one place, and it makes the code more testable.
If a customer doesn't get a feature, you just feed in a null value (or the equivalent).
This idea is basically Inversion of Control.
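As a language-agnostic sketch of that idea (all names invented; in the actual iOS case the Customization struct would be an Objective-C protocol or class), the shared library never knows which customer it is built for, and each per-customer executable injects everything customer-specific:

```c
#include <stdio.h>

/* Hedged sketch of the inversion-of-control idea described above. */
typedef struct {
    const char *brand_name;          /* shown in the UI        */
    const char *logo_file;           /* bundled image resource */
    void (*reporting_feature)(void); /* NULL = feature absent  */
} Customization;

/* In reality this would live in the shared library; stubbed here
   so the sketch compiles and runs. */
static void app_run(const Customization *config)
{
    printf("Running as %s (logo: %s)\n",
           config->brand_name, config->logo_file);
    if (config->reporting_feature)
        config->reporting_feature();  /* only if this client gets it */
}

/* Per-customer executable: the only customer-specific code. */
static const Customization kCustomerA = {
    "Customer A",
    "logo_a.png",
    NULL  /* this client does not get the reporting feature */
};

int main(void)
{
    app_run(&kCustomerA);
    return 0;
}
```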
I like to convert reusable components into frameworks. This way you can expose customizable options and hide all of the complex code.
Another option would be to create a workspace. This would allow you to open projects within it and share their assets with the main workspace files.
Our company is currently writing a GUI automation-testing tool for Compact Framework applications. We initially evaluated many tools, but none of them was right for us.
With the tool you can record test cases and group them into test suites. For every test suite, an application is generated that launches the application under test and simulates user input.
In general the tool works fine, but since we use window handles to simulate user input, there are many things we cannot do. For example, it is impossible for us to get the name of a control (we only get the caption).
Another problem with using window handles is checking for a change. At the moment we simulate a click on a control and, depending on the result, we know whether the application has moved on to the next step.
Is there any other (simpler) way for doing such things (for example the message queue or anything else)?
Interesting problem! I've not done any low-level (think Win32) Windows programming in a while, but here's what I would do.
Use a named pipe and have your application listen on it. Using this named pipe as a communication medium, implement a really simple protocol whereby you can query the application for the name of a control given its HWND, or for anything else you find useful. Make sure the protocol is rich enough that sufficient information can be exchanged between your application and the test framework. Also make sure that the test framework does not trigger too much "special behavior" in the app, because then you wouldn't really be testing its features, but rather your test framework.
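A hedged desktop Win32 sketch of the server side of that idea follows. The pipe name and the one-line "NAME &lt;hwnd&gt;" text protocol are invented for illustration, and the lookup just returns the window class as a placeholder; note also that a Compact Framework / Windows CE target may need a different transport, since named pipes are not generally available there. The thread would be started from the app with CreateThread(NULL, 0, PipeServerThread, NULL, 0, NULL).

```c
/* Pipe server inside the application under test: answers queries
   from the test framework over a named pipe. */
#include <windows.h>
#include <stdio.h>
#include <string.h>

#define PIPE_NAME "\\\\.\\pipe\\guitest"   /* hypothetical pipe name */

DWORD WINAPI PipeServerThread(LPVOID unused)
{
    (void)unused;
    for (;;) {
        HANDLE pipe = CreateNamedPipeA(
            PIPE_NAME, PIPE_ACCESS_DUPLEX,
            PIPE_TYPE_MESSAGE | PIPE_READMODE_MESSAGE | PIPE_WAIT,
            1, 512, 512, 0, NULL);
        if (pipe == INVALID_HANDLE_VALUE)
            return 1;

        /* Wait for the test framework to connect, answer one query. */
        if (ConnectNamedPipe(pipe, NULL) ||
            GetLastError() == ERROR_PIPE_CONNECTED) {
            char request[256], reply[256] = "ERR";
            DWORD n = 0;
            if (ReadFile(pipe, request, sizeof request - 1, &n, NULL)) {
                request[n] = '\0';
                void *raw = NULL;
                /* Invented protocol: "NAME <hwnd>" -> control's name. */
                if (sscanf(request, "NAME %p", &raw) == 1) {
                    /* Look the HWND up in your own control registry here;
                       returning the class name is just a placeholder. */
                    GetClassNameA((HWND)raw, reply, sizeof reply);
                }
                WriteFile(pipe, reply, (DWORD)strlen(reply), &n, NULL);
            }
        }
        DisconnectNamedPipe(pipe);
        CloseHandle(pipe);
    }
}
```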
There are probably far more elegant and cooler ways to implement this, but this is what I remember off the top of my head, using only simple Win32 API calls.
Another approach, which we have implemented for our product at work, is to record user events, such as mouse clicks and key events, in an event script. The script should be rich enough that the application can play it back, artificially injecting those events into the message queue, and behave the same way it did when you first recorded the script. You are basically simulating the user when you play back the script.
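A rough sketch of the playback half (the Event record type is invented for illustration; a real recorder also captures which window each event targeted):

```c
/* Hedged sketch of script playback: replay recorded events by
   injecting them into the target window's message queue. */
#include <windows.h>

typedef struct {
    UINT   message;   /* e.g. WM_LBUTTONDOWN, WM_KEYDOWN           */
    WPARAM wParam;    /* key code / button flags, as recorded      */
    LPARAM lParam;    /* click coordinates, key repeat info, etc.  */
    DWORD  delayMs;   /* time elapsed since the previous event     */
} Event;

static void PlayBack(HWND target, const Event *script, size_t count)
{
    for (size_t i = 0; i < count; i++) {
        Sleep(script[i].delayMs);           /* reproduce original timing */
        PostMessage(target, script[i].message,
                    script[i].wParam, script[i].lParam);
    }
}
```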
In addition to that, you can record any important state (the user's document, preferences, the GUI control hierarchy, etc.), once when you record the script and once when you play it back. This gives you two sets of data you can compare, to make sure, for instance, that everything stays the same. This solution gives you tests that are not easy to modify (you have to re-record if your GUI changes), but that provide awesome regression testing.
(EDIT: This is also a terrific QA tool during beta testing, for instance: just have your users record their actions, and if there's a crash, you have a good chance of easily reproducing the problem by just playing back the script)
Good luck!
Carl
If the automated GUI testing tool has knowledge of the framework the application is written in, it can use that information to produce better or more advanced scripts. TestComplete, for example, knows about Borland's VCL and about WinForms, and it has advanced built-in support for testing applications built with Windows Presentation Foundation.
Use NUnitForms. I've used it with great success for both single-threaded and multithreaded apps, and you don't have to worry about handles and stuff like that.
Here are some posts about NUnitForms worth reading:
NUnitForms and failed DragDrop registration - problem of MTA vs STA
Compiled application exe GUI testing with NUnitForms
I finally found a solution for communicating between the testing application and the application under test: Managed Spy. It's basically a .NET application built on top of ManagedSpyLib.
ManagedSpyLib allows programmatic access to the Windows Forms controls of another process. For this it uses window hooks and memory-mapped files.
Thanks to all who helped me get to this solution!
Managed Spy does not provide a solution for Compact Framework applications.
The company Jamo Solutions (www.jamosolutions.com) meets the requirements for automated testing on mobile devices, including .NET Compact Framework applications.