Two questions:
Is there a way to paste a composition directly into a chosen bin? I
always paste it, then have to scroll through the project viewer until I
find it, and drag it to the desired bin.
Is there an easy way to "deep copy" a composition (meaning, making a copy of the inner compositions as well)?
For question 2:
"deep copy" a composition
If you mean duplicating a composition and its children, including nested compositions, a script called True Comp Duplicator already exists, and you can read through its code if you download it. See the function tcd_duplicateCompStructure(comp, tcd_depth) at line 738 of True Comp Duplicator.jsx and read the code.
I am working on Data Factory and was wondering whether there are any activities that just "move files" without actually reading them, rather than "Copy Data" (which seems to perform a read operation).
I am trying to move files, if any exist, from one folder to another, and when there are many files the process becomes slow, since Copy Data reads each file.
Any suggestions? This is how my current data source looks, and all I want to do is: if any CSV file exists at the location, move it without reading it, per se.
Here is the MSFT link I followed to move files:
https://learn.microsoft.com/en-us/azure/data-factory/solution-template-move-files
This tutorial is not very detailed when it comes to explaining everything; for example, it assumes that the user needs parameters. I did as it said, but my datasets already pointed to exactly where the files needed to be picked up and to land, so I left the parameters empty. Debugging or running the trigger didn't move a file, so the solution as written didn't work.
I had to remove the parameters created in the template to make this work, in case that's helpful to someone. The file move started happening after that.
So, lesson learned: empty parameters won't work. If you don't need them, remove them.
I also watched this tutorial, in case it's helpful to someone:
https://www.youtube.com/watch?v=u_X_f4z8zoQ
I have hundreds of Catia V5 CATParts and CATProducts in a folder on my hard disk. I want to know whether a particular CATPart is used in some CATProduct or not. If it is not used in any product, I want to delete it and clean up my hard disk. One way to do this is to open all the CATProducts one by one and carefully check whether they contain this part. This is a cumbersome process and can lead to serious mistakes. Is there some automatic way to check it? If not, is it possible to write a macro for that purpose?
It is possible with a VBA script. If it's just CATPart files that you're looking for inside products, then your script would work as follows:
Query your folder(s) for all CATParts and CATProducts (use two dictionaries or arrays, one for each file type).
In a loop, individually open and load each CATProduct, essentially walk the tree, and compare each child CATPart to your compiled list of CATParts. If a match is found, move the part to a new "white list" (dictionary or array).
Close the CATProduct and check the next one.
Then, when all done, your original list (dictionary or array) will contain your unused parts.
I'm not sure exactly how your models are built, but you may need to check for additional references/links in your CATProducts (additional logic) before doing something like this.
I am trying to build modular, reusable code in LabVIEW.
I want to create a UI component that allows me to select one of the files or directories in a given directory.
I created a subvi that does this. So far so good.
I can use this subvi as a component in other vis, by putting it into a subpanel.
I want to have several such subpanels, each with an "instance" of the subvi, in my main vi.
I cannot do this. LabVIEW opens the subvi in one subpanel and throws an error when opening it in another one.
How can I tell LabVIEW to create a duplicate/new "instance" of the subvi that runs independently from any other one?
I found out that XControls are probably a better approach to creating components, but they are not available to me, whether or not they would solve the above problem.
LabVIEW 2013
You need to configure the subvi to be reentrant.
This allows LabVIEW to allocate data space for each instance.
There are different types of reentrancy; I would stick with the pre-allocated option to start.
http://zone.ni.com/reference/en-XX/help/371361J-01/lvconcepts/reentrancy/
I have a custom toolbox with a foo element.
I would like foo to be green on a class diagram and red on a flow chart diagram by default.
Adding more than one stereotype to a non-UML type is impossible (as far as I know).
Is it possible to create two toolboxes, one for class diagrams and one for flow charts, specifying the default diagram for each toolbox in the profile?
Not quite the way you describe it.
Toolboxes don't specify which diagram they open; it's the other way around: you create a custom diagram type and associate it with a toolbox. Different custom diagrams may use the same custom toolbox.
You can create two custom diagram types, one for class ("Logical") and one for flow chart ("Activity"), but if you're only after getting the same stereotyped element (foo) to display differently in the diagrams, you don't need to.
The solution is to create a shape script for the stereotype, which checks the diagram type and changes the color accordingly. The diagram type can be queried from the shape script using the diagram.type property (for the base UML diagram type), or diagram.mdgtype (for the custom diagram type, if you've defined one). There is no need to create an Add-In, as another answer suggests, at least not in EA 11.
Check the help file under Extending UML Models -- MDG Technology SDK -- Shape Scripts -- Write Scripts -- Display Element/Connector Properties.
A simple script might look like this:
shape main {
    if (hasproperty("diagram.type", "Logical")) {
        setfillcolor(0, 255, 70);
    } else if (hasproperty("diagram.type", "Activity")) {
        setfillcolor(255, 87, 87);
    }
    drawnativeshape();
};
No. You would need two different stereotypes. The target diagram is independent of the element. If you want the element to appear differently depending on the type of diagram it is used on, you need to adapt the shape script so it calls an add-in which detects the diagram type.
Well, writing that last sentence, I would not know how to detect which diagram the element in question is placed on. Needs investigation. But other than that, no solution I know of.
Edit: Since the add-in just receives the element GUID, it has no way to figure out the diagram from which the call is made. Probably worth a feature request. But the days when we saw those realized in the next build are gone (for more than 10 years now).
A last thought: template packages. I have almost never used them. Maybe they offer coloring depending on diagram/element.
Edit 2: Last resort: EA_OnPostNewDiagramObject. Catch that event and you can get all the information you need to apply the color.
Can anyone share their experience of a workflow for R project development under ESS? I have tried several times to learn Emacs, but I haven't gotten the hang of it yet. I can understand ESS as an editor, but is there a project view in ESS? What are efficient ways to set up and view an R project directory, to code, and to test, and how does ESS have an edge in facilitating the whole process?
Do you use ESS only as a good R editor, or do you tend to emulate an R IDE environment within ESS?
Thanks for any advice.
It sounds like you're asking two separate questions.
One question concerns workflow and the other concerns using ESS.
As I use StatET and Eclipse, I'll just share my experience regarding the workflow aspect of your question.
As with Vincent I also follow something like the workflow set out by Josh Reich here (also see Hadley's useful comments):
Workflow for statistical analysis and report writing
Although it can vary between projects, I tend to have a couple of main R files
import.R: This imports data files and does any necessary cleaning and manipulation
analyse.R: This generates the output that I need for any final report
main.R: This calls import.R and analyse.R
The aim is for import.R and analyse.R to represent the complete and final workflow for producing the results of any analyses.
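For illustration, a minimal main.R along those lines might be nothing more than a couple of source() calls (the file names match the list above; the comments describe my intent, not anything prescribed by ESS or R):
# main.R: run the complete analysis from a clean session
source("import.R")    # read and clean the raw data
source("analyse.R")   # generate the output for the final report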
In terms of a directory structure for an analysis project, I'll often also have the following folders
data: for storing any raw data files
meta: for storing meta data, such as variable labels, scoring systems for tests, recoding information, etc.
output: for storing any graphics, tables, or text generated by my analyses that I might want to incorporate into an external program
temp: When exploring the data and brainstorming analyses, I like to type code into files instead of using the console. I tend to label these temp1.R, temp2.R, temp3.R. I store these in a temp folder. That way I have a permanent record that's easily accessible. If the analyses become final they get incorporated into one of the main R files (i.e., import.R or analysis.R)
functions: If I think that a function will be needed across a couple of projects, I often place it in a folder called functions, either one function per file or with a set of related functions in a file. This makes it relatively easy to reuse functions across projects when the formal requirements of package development would be overkill (see the sketch after this list).
library: If I want to create some functions that I think will be specific to this project, I'll place them in this folder
save: A folder to store any saved R objects
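As a sketch of how such a layout gets used from within a script (the folder names are the ones above; the object name cleaned_data is purely illustrative):
# source every .R file sitting in the functions/ folder
for (f in list.files("functions", pattern = "\\.R$", full.names = TRUE)) {
  source(f)
}
# cleaned_data is an illustrative object created earlier in the script;
# stash it in save/ so later sessions can reload it without re-running the import
saveRDS(cleaned_data, file.path("save", "cleaned_data.rds"))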
StatET and Eclipse make it easy to interact with such a file system.
Of course, given all the R gurus that use ESS and Emacs, I'm sure it also handles interactions with the file system well.
I'm not exactly sure what you expect as an answer on this one. I, for one, have stolen (and adapted) a system that was suggested here a little while ago (by Josh Reich):
Create a folder for every project, and split up your work in a bunch of different .R files:
Load.R for getting your raw data into R;
Prep.R for cleaning the data, recoding variables, etc.;
Func.R for coding any custom functions you will need for evaluation; and
Eval.R for running your final stuff.
If that doesn't fit your style, just change it.
Then, you can either have a master file that calls each of the parts one after the other (good for reproducibility), or save at different stages and have the individual scripts load the appropriate data (good if some of the prep work is very computationally/time intensive).
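A rough sketch of both options, using the file names above (prepped.rds and the object name prepped are just illustrative):
# Option 1: a master file that re-runs everything, for full reproducibility
source("Load.R")
source("Prep.R")
source("Func.R")
source("Eval.R")

# Option 2: save after the expensive prep step and reload later
# at the end of Prep.R:
saveRDS(prepped, "prepped.rds")     # 'prepped' is an illustrative object name
# at the top of Eval.R (or an interactive session):
prepped <- readRDS("prepped.rds")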
On a different note, the trick posted at the following link really helped me get into ESS. It turns Shift-Enter into a one-stop ESS shop: http://www.kieranhealy.org/blog/archives/2009/10/12/make-shift-enter-do-a-lot-in-ess/
Others have given you some good ideas about how to setup your directory/file structure for a project.
You also asked about "project views," in which case you might want to look into the Emacs Code Browser (ECB).
You can find some screen shots of it in action on its site, here:
http://ecb.sourceforge.net/screenshots/index.html