Deleting multiple labels in Perforce - Perl

I am working on a Perforce-Perl script that creates labels. Due to repeated execution of the script, I have created hundreds of labels of the form LABEL_A.180 etc.
I want to know if there is any command or any other way by which I can delete multiple labels at a time.

There is no command or feature in P4V to delete multiple labels. The best approach is to write another script that finds the labels and then removes them one by one.
I do not know the P4Perl API in detail, but the deletion code should be very similar to your existing label-creation code, just with an additional -d flag passed to the p4 label command.
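That said, a rough, untested sketch with the P4 module might look like this (the LABEL_A.\d+ pattern is an assumption based on your naming scheme; adjust it as needed):
use strict;
use warnings;
use P4;

my $p4 = new P4;
$p4->Connect() or die "Could not connect to the Perforce server\n";

# "p4 labels" returns one hashref per label; the 'label' key holds the name.
my @labels = $p4->Run("labels");
for my $l (@labels) {
    my $name = $l->{label};
    next unless $name =~ /^LABEL_A\.\d+$/;  # only our generated labels
    $p4->Run("label", "-d", $name);         # same as: p4 label -d NAME
    print "Deleted label $name\n";
}

$p4->Disconnect();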
HTH,

Related

How to tell what file a packet came from after files merged

I am dealing with a large number of pcap files from numerous collection sources. I need to filter them programmatically, and I am using tshark for that, so I am first merging all the files together with mergecap. The problem is that I also need the collection-point information, which is only available in the capture file name. I tried using editcap to add per-packet comments specifying the original file, but that is untenable (see below for an explanation). Any ideas how to track the original file after the pcap files are merged?
why editcap solution won't work
I considered using editcap to add per-packet comments on every packet before merging (How to add a comment to all packets in numerous pcap files before merging into a single file), but the problem with this approach is that editcap requires every packet comment to be individually specified on the command line (you can't specify a range of packets). That's hundreds of thousands of comments, and the command line won't support that. Additionally, if I try to run editcap with just a few comments at a time, over and over, it rewrites the entire file every time, leading to thousands of file rewrites. Also not viable.
If your original capture files are in .pcapng format, then each one contains an Interface Description Block (IDB). When you run mergecap to merge them, you can specify that IDBs not be merged using the -I none option. This way, the interface number will be unique per original file, and you can add a column showing that information to easily differentiate the source of each packet by interface ID, or apply a display filter to isolate only those packets from a particular capture file.
The filter or column to use would be the frame.interface_id field. You could also filter by frame.interface_name or frame.interface_description if those fields all have different values, but there's no guarantee they will be unique: the interface name and/or description might contain the same information even if the capture files originate from different machines.
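For example (the file names here are placeholders):
mergecap -I none -w merged.pcapng site-a.pcapng site-b.pcapng
tshark -r merged.pcapng -Y "frame.interface_id == 0"
The tshark command isolates the packets whose interface came from one particular input file; you can run it with -T fields -e frame.interface_id first to inspect how the IDs were assigned across your inputs.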

Enterprise Architect: Setting run state from initial attribute values when creating instance

I am on Enterprise Architect 13.5, creating a deployment diagram. I am defining our servers as nodes, and using attributes on them so that I can specify their details, such as Disk Controller = RAID 5 or Disks = 4 x 80 GB.
When dragging instances of these nodes onto a diagram, I can select "Set Run State" on them and set values for all attributes I have defined - just like it is done in the deployment diagram in the EAExample project:
Since our design will have several servers using the same configuration, my plan was to use the "initial value" column in the attribute definition on the node to specify the default configuration so that all instances I create automatically come up with reasonable values, and when the default changes, I would only change the Initial Values on the original node instead of having to go to all instances:
My problem is that even though I define initial values, the instances I create do not show any values when I drag them onto the diagram. Only by setting the run state on each instance can I get them to show the values I want:
Is this expected behavior? By the way, I can reproduce the same thing using classes and instances of them, so this is not merely a deployment-diagram issue.
Any ideas are greatly appreciated! I would also be thankful if you could describe a better way to achieve the same result with EA, in case I am doing it wrong.
You could either write a script to assist with this or create an add-in for more automation. Scripting is easier to implement, but you need to run the script manually (it can, however, set the values in a batch for newly created diagram objects). An add-in could do this on element creation if you hook EA_OnPostNewElement.
What you need to do first is get the classifier of the object. Using
Repository.GetElementByID(object.ClassifierID)
will return that. You can then check the attributes of that class and make a list of those with an initial value. Finally, you add the run states of the object by assigning object.RunState a crude string. E.g. for a != 33 it would be
#VAR;Variable=a;Value=33;Op=!=;#ENDVAR;
Just join as many of these as you need for multiple run states.
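If you go the scripting route, a rough sketch in VBScript (untested; it assumes the selected diagram object is an instance with a classifier, and that the attribute's initial value is available via Attribute.Default) could look like this:
' Build the run state of the selected instance from the
' initial values of its classifier's attributes.
dim obj, classifier, attr, rs, i
set obj = Repository.GetContextObject()
set classifier = Repository.GetElementByID(obj.ClassifierID)
rs = ""
for i = 0 to classifier.Attributes.Count - 1
    set attr = classifier.Attributes.GetAt(i)
    if attr.Default <> "" then
        ' Op== means the plain "=" operator, per the format above
        rs = rs & "#VAR;Variable=" & attr.Name & ";Value=" & attr.Default & ";Op==;#ENDVAR;"
    end if
next
obj.RunState = rs
obj.Update()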

How to filter out all amenities with osmfilter together with any other tag information they have?

I managed to download a planet file from OSM and converted it to o5m format with osmconvert; I additionally deleted all of the author information from it to keep the file size smaller. I am trying to fetch every POI from this database for the whole world, so I am not interested in cities, towns, highways, ways, etc., only amenities.
First I tried to achieve this using osmosis, which, as it appears, manages to do what I want, only it always runs out of memory because the file is too huge to process. (I could split the file up into smaller ones, but I would like to avoid this if possible.)
I tried experimenting with osmfilter. There I managed to filter for every node which has a tag named amenity, but I have several problems which I can't solve:
a. if I use the following command:
osmfilter planet.o5m -v --keep-tags="amenity=" -o=amenities.osm
It keeps all nodes and filters out every tag which doesn't have amenity in its name.
b. if I use this command:
osmfilter planet.o5m -v --keep-tags="all amenity=" -o=amenities.osm
It now drops all nodes which don't have the amenity tag, but it also strips all additional tags from the matching nodes, and those contain information that I need (for example, the name or description of the POI).
c. if I use this command:
osmfilter planet.o5m -v --keep-tags="all amenity= all name=" -o=amenities.osm
This keeps every node which has EITHER a name or an amenity tag, which leaves me with several named cities or highways (data that I don't need).
I also tried separating these with an AND operator, but it says that I can't use the AND operator when filtering for tags. Any idea how I could achieve the desired result?
End note: I am running a Windows 7 system, so no Linux-based program would help me. :|
Please try the --keep= option instead of the --keep-tags option. The latter has no influence on which objects will be kept in the file.
For example:
osmfilter planet.o5m --keep="amenity= and name=" -o=amenities.osm
will keep just those objects which have an amenity tag AND a name tag.
Be aware that all dependent objects will also be in the output file. For example, if there is a way object with the requested tags, each node of this way will be in the output file as well. The same is valid for relations and their dependent ways and nodes.
If you do not want this kind of behaviour, add --ignore-dependencies.
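For example, combining this option with the query above:
osmfilter planet.o5m --keep="amenity= and name=" --ignore-dependencies -o=amenities.osm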
Some more information can be found here: https://wiki.openstreetmap.org/wiki/Osmfilter
This is exactly what you are looking for:
osmfilter inputfile.osm --keep-nodes="amenity=" --keep-tags="all amenity= name=" --ignore-dependencies -o=outputfile.osm
It does the following:
keep nodes with the tag amenity (--keep-nodes)
keep only tag info about amenities and names (--keep-tags)

MATLAB distributed computing with SGE (qsub)

Recently I got access to run my code on a cluster. My code is totally parallelizable, but I don't know how best to use its parallel nature. I have to compute the elements of a big matrix, and each of them is independent of the others. I want to submit the job to run on several machines (like 100) to speed up the computation of the matrix.
Right now, I have a script that submits multiple jobs, each responsible for computing a part of the matrix and saving it in a .mat file. At the end I merge them to get the whole matrix. To submit each individual job, I create a new .m file (run1.m, run2.m, ...) that sets a variable and then runs the function to compute the associated part of the matrix. So basically run1.m is
id=1;compute_dists_matrix
and compute_dists_matrix uses id to find the part it is going to compute. Then I wrote a script to create run1.m through run60.m and qsub them to the cluster.
I wonder if there is a better way to do this using some MATLAB features for example. Because this seems to be a very typical task.
Yes, that works, but it is not ideal, and as you say this is a common problem. MATLAB has a parallel programming toolbox (the Parallel Computing Toolbox).
Does your cluster have it? If so, distributed arrays are worth having a look at. If you don't have access to it, then what you are doing is the only other way. You can wrap your run1.m, run2.m, ... in a controlling script to automate it for you...
I believe you could use command-line arguments for the id and submit jobs with a range of values for this id. Command-line arguments can be processed by launching MATLAB from the command line without the IDE, providing the name of the script to execute and the list of arguments. I would think you can set up dependencies in your job manager and create a "reduce" script to merge the partial results (from the files). The whole process could be managed from a single script that generates the id and other necessary arguments and submits the processing and post-processing jobs with dependencies.
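As a sketch of that idea, a single SGE array job can replace the sixty generated run*.m files; the task ID takes the role of id (this assumes compute_dists_matrix is on the MATLAB path and reads the variable id, as in your current setup):
#!/bin/bash
#$ -t 1-60
#$ -cwd
matlab -nodisplay -nosplash -r "id=${SGE_TASK_ID}; compute_dists_matrix; exit"
Submitting this once with qsub job.sh launches sixty tasks, each with its own SGE_TASK_ID.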

Finding baselines associated with a CR in the Telelogic Synergy command line

How would one find baselines associated with a CR in Telelogic Synergy using the CLI interface? I have tried ccm query "cvtype='baseline' and cr('xxx')", but this doesn't produce any results.
From the GUI you can look at the properties of a baseline and see which CRs are associated with the baseline, but I can't seem to find the proper CLI magic to allow me to write a script to take a CR and list the baselines.
I think associations between a baseline and a CR are handled with relationships (ccm relate).
Search for "Predefined relationships" in the Synergy manual for a list of the existing relationships. Once you know the name of the relationship, you should be able to use a query with the function has_relationship_name().
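For example, if the relationship were named associated_cr (a made-up name here; substitute the real one from the manual), the query would follow this pattern:
ccm query "cvtype='baseline' and has_associated_cr(cr('xxx'))"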
A change request is associated with a RELEASE rather than a BASELINE, so the following query will help you get the RELEASE; you can then run another query to retrieve the baselines.
To retrieve the release for the CR:
ccm.exe query -f "%release %modify_time %create_time" "cr('xxxxx')"
Once you have retrieved the RELEASE and MODIFY_TIME, run a new query to get the BASELINES:
ccm.exe query -f "%objectname %modify_time %create_time" "(cvtype='project') and (release='pppp/qqqq') and (modify_time>=time('1/30/13'))" -s integrate
This way you will get a narrower list of BASELINES to work with. I know this may not be the answer you are looking for, but it might help.