EA Connector Creation - enterprise-architect

I am creating an Aggregation connector through an Add-In. I am able to create the connector, but without a Strong target end, using the code below.
EA.Connector connector = signalEle.Connectors.AddNew("", "Aggregation");
connector.SupplierID = parentElement.ElementID;
connector.Subtype = "Strong";
connector.StyleEx = "LFEP=" + strEleName.AttributeGUID + "L;";
connector.ClientEnd.Role = strEleName.Name;
connector.Update();
How do I create the connector with a Strong target end?

EA strikes again. Instead of setting Subtype to "Strong", you need to do this:
EA.ConnectorEnd ce = connector.ClientEnd;
ce.Aggregation = 2; // 0 = none, 1 = shared (weak), 2 = composite (strong)
ce.Update();
Or, if it is the other way around, use SupplierEnd instead. The Subtype seems to be ignored in this case.
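Putting the question and the fix together, a minimal sketch of the whole creation step might look like this (names are taken from the question; depending on your Add-In language you may need the cast on AddNew, and whether ClientEnd or SupplierEnd carries the aggregation depends on which end should show the diamond):
EA.Connector connector = (EA.Connector)signalEle.Connectors.AddNew("", "Aggregation");
connector.SupplierID = parentElement.ElementID;
connector.StyleEx = "LFEP=" + strEleName.AttributeGUID + "L;";
connector.ClientEnd.Role = strEleName.Name;
connector.Update();
// Set the aggregation kind on the connector end rather than on the connector's Subtype
EA.ConnectorEnd targetEnd = connector.ClientEnd; // or connector.SupplierEnd for the other direction
targetEnd.Aggregation = 2; // composite ("Strong")
targetEnd.Update();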

What is the purpose of the StateSet variables seen in OpenModelica's result variable browser

When I simulate the model below, I get additional variables labelled $STATESET1, which are obviously auto-generated.
What is the purpose of these variables from the perspective of the user? Generally I am only interested in the solution, not in the specific strategies a particular solver used to reach it, right? So isn't this more like something that should only be output if one turns on some kind of model debugging, rather than something the average OpenModelica user can take advantage of? What if there is more than one "state set" (say $STATESET1 and $STATESET2): how am I supposed to know how these variables relate to my model, given their generic names? More specifically, what is $STATESET1.x[:]? Nothing in the original or flattened model gives a hint about this...
model StateSetTest
  import SI = Modelica.SIunits;
  Real[3] q(start = zeros(3), each fixed = true);
  Real q4(start = 1);
  Real[3] w(start = zeros(3), each fixed = true);
  SI.Torque[3] TResult;
equation
  q * q + q4 * q4 = 1;
  w = 2.0 * (q4 * der(q) - der(q4) * q - cross(der(q), q));
  der(w) = TResult;
  TResult = zeros(3);
end StateSetTest;
They are used for dynamic state selection, i.e. changing the selected states during the simulation. In this model the constraint q * q + q4 * q4 = 1 means only three of the four quaternion variables can be states at any one time, and the best choice can change as the simulation runs; the $STATESET1.x[:] variables are the states the solver actually integrates, selected from those candidates. And yes, they are not really needed by the user. I guess we could filter them out of OMEdit. I'll open a ticket about this.

How can I add rollup functionality in SugarCRM (CE)?

Can anyone tell me how to add roll-up functionality in SugarCRM (CE)?
Our requirement is for the sum of project amounts to roll up to the opportunity amount field in SugarCRM.
You can achieve this by writing an after_save logic hook, as described below.
I have achieved similar functionality where the sum of the pending amounts of all cases is stored in the Accounts module.
$customer_id = $_REQUEST['mc_companyusers_cases_1mc_companyusers_ida'];
if ($customer_id) {
    // Sum the pending payment of every case linked to this customer
    $rs = $bean->db->query("SELECT cc.pending_payment_c FROM mc_companyusers_cases_1_c m inner join cases c on m.`mc_companyusers_cases_1cases_idb` = c.`id` inner join cases_cstm cc on cc.`id_c` = c.`id` where m.`mc_companyusers_cases_1mc_companyusers_ida` = '".$customer_id."'");
    $total_pending_amount = 0;
    while ($row = $bean->db->fetchByAssoc($rs)) {
        $total_pending_amount += $row['pending_payment_c'];
    }
    // Store the roll-up total on the customer record
    $bean->db->query("Update mc_companyusers_cstm set total_pending_payment_c='".$total_pending_amount."' where id_c='".$customer_id."'");
}
So, in the query above, you can map the Projects module to Cases and the Opportunities module to Accounts.
Thank you.
You could add a field with a function that dynamically calculates the sum.
Or use a logic hook that updates a real database field whenever a related record gets added.
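As a rough sketch of the logic-hook variant for the projects-to-opportunity requirement (the relationship name projects_opportunities, the project field amount_c, and the file paths are assumptions; replace them with whatever your install actually uses):
// custom/modules/Project/logic_hooks.php
$hook_array['after_save'][] = array(
    1,
    'Roll project amounts up to the related opportunity',
    'custom/modules/Project/ProjectRollupHook.php',
    'ProjectRollupHook',
    'rollupToOpportunity',
);
// custom/modules/Project/ProjectRollupHook.php
class ProjectRollupHook
{
    public function rollupToOpportunity($bean, $event, $arguments)
    {
        // Assumed link name between Project and Opportunities
        if (!$bean->load_relationship('projects_opportunities')) {
            return;
        }
        foreach ($bean->projects_opportunities->getBeans() as $opportunity) {
            // Re-sum every project linked to this opportunity and store the total
            $total = 0;
            if ($opportunity->load_relationship('projects_opportunities')) {
                foreach ($opportunity->projects_opportunities->getBeans() as $project) {
                    $total += (float) $project->amount_c; // assumed custom amount field
                }
            }
            $opportunity->amount = $total;
            $opportunity->save();
        }
    }
}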

XMeans ELKI fails at every third input file

I'm trying to cluster image data (stored in 100 separate CSV files) with ELKI's XMeans algorithm. It works well for the first two files, but then the algorithm hangs forever while processing the third file. It looks like the problem occurs at every third file or so, because when I start the loop that goes over all files at the fourth file, it works for the fourth and fifth files but not for the sixth. The same goes for the 9th and 11th files... but maybe that's a coincidence.
My XMeans call looks like this:
DatabaseConnection dbc = new ArrayAdapterDatabaseConnection(data);
Database db = new StaticArrayDatabase(dbc, null);
db.initialize();
Relation<NumberVector> rel = db.getRelation(TypeUtil.NUMBER_VECTOR_FIELD);
DBIDRange ids = (DBIDRange) rel.getDBIDs();
SquaredEuclideanDistanceFunction dist = SquaredEuclideanDistanceFunction.STATIC;
RandomlyGeneratedInitialMeans init = new RandomlyGeneratedInitialMeans(RandomFactory.DEFAULT);
KMeansInitialization initializer = new FirstKInitialMeans();
PredefinedInitialMeans splitInitializer = new PredefinedInitialMeans(data);
KMeansQualityMeasure informationCriterion = new WithinClusterMeanDistanceQualityMeasure();
RandomFactory random = new RandomFactory(123);
KMeans<NumberVector, KMeansModel> innerKMeans = new KMeansHamerly<>(dist, 50, 1, init, true);
XMeans<NumberVector, KMeansModel> xm = new XMeans<>(dist, 5, 50, 1, innerKMeans, initializer, splitInitializer, informationCriterion, random);
Clustering<KMeansModel> c = xm.run(db, rel);
I'm not too sure about these four lines, so maybe that's why it works for some files and for others it doesn't:
KMeansInitialization initializer = new FirstKInitialMeans();
PredefinedInitialMeans splitInitializer = new PredefinedInitialMeans(data);
KMeansQualityMeasure informationCriterion = new WithinClusterMeanDistanceQualityMeasure();
RandomFactory random = new RandomFactory(123);
data is just a double[][] which contains the data from the input files.
Any help would be very appreciated!
Please, use the Parameterization API to configure X-means.
Because of the nested k-means, it is very easy to configure things badly.
The initializer of the inner k-means class must be set to this:
PredefinedInitialMeans splitInitializer = new PredefinedInitialMeans((double[][]) null);
KMeans<NumberVector, KMeansModel> innerKMeans = new KMeansHamerly<>(dist, 50, 1, splitInitializer, true);
because otherwise X-means currently cannot control the initialization of the inner algorithm. I will remove this parameter, and have XMeans set the initializer of the inner algorithm.
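Putting that together with the construction code from the question (same constructor signatures as used above; they may differ in other ELKI versions), the direct-Java setup would look roughly like this:
SquaredEuclideanDistanceFunction dist = SquaredEuclideanDistanceFunction.STATIC;
RandomlyGeneratedInitialMeans init = new RandomlyGeneratedInitialMeans(RandomFactory.DEFAULT);
// Let X-means own the split initializer; do not seed it with your data array.
PredefinedInitialMeans splitInitializer = new PredefinedInitialMeans((double[][]) null);
KMeansQualityMeasure informationCriterion = new WithinClusterMeanDistanceQualityMeasure();
RandomFactory random = new RandomFactory(123);
// The inner k-means uses the same split initializer, so X-means controls its starting means.
KMeans<NumberVector, KMeansModel> innerKMeans = new KMeansHamerly<>(dist, 50, 1, splitInitializer, true);
XMeans<NumberVector, KMeansModel> xm = new XMeans<>(dist, 5, 50, 1, innerKMeans, init, splitInitializer, informationCriterion, random);
Clustering<KMeansModel> c = xm.run(db, rel);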
Without a stack trace (as mentioned by @Anony-Mousse) it is hard to say what is happening. My best guess is that this meta-algorithm (an algorithm that runs another algorithm!) is not correctly configured and maybe chooses bad initial values?

Entity Framework - TOP using a dynamic query

I'm having issues implementing the TOP or SKIP functionality when building a new object query.
I can't use eSQL because I need to use an "IN" clause, which could get quite complex if I loop over the IN values and add them all as "OR" parameters.
Code is below:
Using dbcontext As New DB
    Dim r As New ObjectQuery(Of recipient)("recipients", dbcontext)
    r = r.Include("jobs")
    r = r.Include("applications")
    r = r.Where(Function(w) searchAppIds.Contains(w.job.application_id))
    If Not statuses.Count = 0 Then
        r = r.Where(Function(w) statuses.Contains(w.status))
    End If
    If dtFrom.DbSelectedDate IsNot Nothing Then
        r = r.Where(Function(w) w.job.create_time >= dtFrom.DbSelectedDate)
    End If
    If dtTo.DbSelectedDate IsNot Nothing Then
        r = r.Where(Function(w) w.job.create_time <= dtTo.DbSelectedDate)
    End If
    'a lot more If conditions to add additional predicates
    grdResults.DataSource = r
    grdResults.DataBind()
End Using
If I use any form of .Top or .Skip, it throws an error: "Query builder methods are not supported for LINQ to Entities queries".
Is there any way to specify TOP or a limit using this method? I'd like to avoid a query returning thousands of records if possible (it's for a user search screen).
Rather than
r = new ObjectQuery<recipient>("recipients", dbContext)
try
r = dbContext.recipients
.Skip() and .Take() return IQueryable(Of T) just like .Where(), but LINQ to Entities requires the query to be ordered before .Skip() is applied. So put the .OrderBy(), .Skip() and .Take() calls last, after all the .Where() clauses.
Also change grdResults.DataSource = r to grdResults.DataSource = r.ToList() to execute the query now. That'll also allow you to temporarily wrap this line in try/catch, which may expose a better message about why it's erroring.
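A sketch of the suggested shape, assuming the context exposes a recipients set as in the line above (the OrderBy key and the page size of 100 are just examples):
Using dbcontext As New DB
    Dim r As IQueryable(Of recipient) = dbcontext.recipients.Include("jobs").Include("applications")
    r = r.Where(Function(w) searchAppIds.Contains(w.job.application_id))
    ' ...the other conditional Where clauses from the question...
    ' Order, page and materialize last, after all the filters
    grdResults.DataSource = r.OrderBy(Function(w) w.job.create_time).Take(100).ToList()
    grdResults.DataBind()
End Using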
Mark this one down to confusion. I should have been using .Take instead of .Top or .Limit.
My final line is below, and it works:
grdResults.DataSource = r.Take(100)

Creating SSAS 2008 cube partitions using Powershell?

How can we create SSAS 2008 cube partitions using Powershell?
This adds a partition to the Adventure Works DW 2008R2 database (specifically, to the Fact Internet Sales measure group of the Adventure Works cube):
$server_name = "localhost"
$catalog = "Adventure Works DW 2008R2"
$cube = "Adventure Works"
$measure_group = "Fact Internet Sales"
$old_partition = "Customers_2004"
$new_partition = "Customers_2009"
$old_text = "'2008"
$new_text = "'2009"
# Load AMO and connect to the Analysis Services instance
[Reflection.Assembly]::LoadFile("C:\Program Files\Microsoft SQL Server\100\SDK\Assemblies\Microsoft.AnalysisServices.DLL")
$srv = new-object Microsoft.AnalysisServices.Server
$srv.Connect("Data Source=" + $server_name)
# Clone an existing partition, then adjust its ID, name and source query
$new_part = $srv.Databases[$catalog].Cubes[$cube].MeasureGroups[$measure_group].Partitions[$old_partition].Clone()
$new_part.ID = $new_partition
$new_part.Name = $new_partition
$new_part.Source.QueryDefinition = $new_part.Source.QueryDefinition.Replace($old_text, $new_text)
# Add the clone to the measure group, then update the partition and the whole database on the server
$srv.Databases[$catalog].Cubes[$cube].MeasureGroups[$measure_group].Partitions.Add($new_part)
$srv.Databases[$catalog].Cubes[$cube].MeasureGroups[$measure_group].Partitions[$new_partition].Update()
$srv.Databases[$catalog].Update()
$srv.Disconnect()
You'll have to change variables up top, and the reference to the Microsoft.AnalysisServices.dll assembly, but other than that, this will work peachy keen.
The trick is to call Update() on the object changed and then on the whole database itself.
If you'd like to process the new partition as well, you can do that with the following line before $srv.Disconnect():
$srv.Databases[$catalog].Cubes[$cube].MeasureGroups[$measure_group].Partitions[$new_partition].Process()
You can learn more about Analysis Management Objects (AMO) here.
Check out this: PowerSSAS
It doesn't have explicit add-partition support, so you'll probably have to craft an XMLA snippet to add the partition and then use PowerSSAS to push it to the SSAS server.
You can use:
Microsoft.AnalysisServices.Deployment [ASdatabasefile]
    {[/s[:logfile]] | [/a] | [[/o[:output_script_file]] [/d]]}
to deploy your Analysis Services cube with PowerShell.
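For example, a call from PowerShell might look like the following (the path to Microsoft.AnalysisServices.Deployment.exe varies with your installation, the .asdatabase path is a placeholder, and /s runs the deployment silently, writing to the given log file):
& "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" `
    "C:\SSAS\MyCube.asdatabase" /s:"C:\SSAS\deploy.log"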