How to configure a routing problem with a custom decision builder? - or-tools

EDIT: this question is for people who know or-tools, in particular the routing library:
https://developers.google.com/optimization
https://developers.google.com/optimization/routing
I want to deepen my understanding of the routing library, and I have read the or-tools manual. I would like to pass a decision builder to the solver; the decision builder should decide how nextVar assignments are made.
Here is my attempt:
Java:
int indexDepot = 0;
int numberStops = 100;
int numberVehicles = 8;
RoutingIndexManager indexManager = new RoutingIndexManager(numberStops, numberVehicles, indexDepot);
RoutingModelParameters modelParameters = parameters.getRoutingModelParameters(); // "parameters" is defined elsewhere
RoutingModel model = new RoutingModel(indexManager, modelParameters);
Solver cpsolver = model.solver();
// Skip index 0 (the depot) and size the array accordingly, so it contains no null entries.
IntVar[] nextVars = new IntVar[numberStops - 1];
for (int i = 1; i < numberStops; i++) {
    nextVars[i - 1] = model.nextVar(i);
}
DecisionBuilder db = cpsolver.makePhase(nextVars, Solver.CHOOSE_RANDOM, Solver.ASSIGN_RANDOM_VALUE);
// newSearch() only installs the search; nothing is explored until nextSolution()
// is called in a loop, and the routing constraints only exist on the underlying
// solver once the model has been closed (closeModel()).
cpsolver.newSearch(db);
It seems that cpsolver.newSearch(db); does not have any effect. How do I correctly pass a decision builder to the routing model?
Here is the part of the manual about search primitives:
https://acrogenesis.com/or-tools/documentation/user_manual/manual/search_primitives/out_of_the_box_search_primitives.html

Related

Implementing table insertion into Word using Office for JS (officejs)

I am working on a project aiming to convert a C#-based Word add-in into JavaScript using officejs.
I was quite surprised that, after 5 years of development, the officejs API is still not quite there in terms of coverage compared to the C# API.
I am struggling with how to translate the following C# Word API code into the JavaScript-based API; a lot of the functionality seems to simply not be there.
How can the following code be converted to JavaScript to achieve the same end-state functionality?
MsWord.Table tblchart = rng.Tables.Add(rng, NumRows: 1, NumColumns: 1,
AutoFitBehavior: MsWord.WdAutoFitBehavior.wdAutoFitFixed);
tblchart.AllowPageBreaks = false;
tblchart.Borders.OutsideLineStyle = WdLineStyle.wdLineStyleNone;
tblchart.Borders.InsideLineStyle = WdLineStyle.wdLineStyleNone;
tblchart.Borders.Shadow = false;
tblchart.TopPadding = tblchart.BottomPadding = tblchart.LeftPadding = 0f;
tblchart.RightPadding = application.InchesToPoints(0.02f);
tblchart.PreferredWidthType = MsWord.WdPreferredWidthType.wdPreferredWidthPoints;
tblchart.set_Style(WordStyles.Table);
tblchart.Range.set_Style(WordStyles.TableMaster);
tblchart.Rows.WrapAroundText = System.Convert.ToInt32(false);
tblchart.PreferredWidth = _Imagewidth;
tblchart.Descr = _description;
tblchart.Rows[1].AllowBreakAcrossPages = 0;
tblchart.Rows[1].Range.set_Style(WordStyles.Figure);
tblchart.Rows[1].Range.Text = "";
tblchart.Rows.SetLeftIndent(LeftIndent: application.InchesToPoints(_leftIndent), RulerStyle: WdRulerStyle.wdAdjustNone);

Unable to add data in archive table in Entity Framework

I wrote code to update my table (SecurityQuestionAnswer) with new security password questions and to move the old questions to another table (SecurityQuestionAnswersArchives). The total number of security questions is 3. I am able to update the current table, but when I add the same rows to the history table, I get weird data: only two records are added instead of 3, and the data is also duplicated. My code is as follows:
if (oldQuestions.Any())
{
    var oldquestionstoarchivelist = new List<SecurityQuestionAnswersArchives>();
    var oldquestionstoarchive = new SecurityQuestionAnswersArchives();
    for (int i = 0; i < 3; i++)
    {
        oldquestionstoarchive.Id = oldQuestions[i].Id;
        oldquestionstoarchive.SecurityQuestionId = oldQuestions[i].SecurityQuestionId;
        oldquestionstoarchive.Answer = oldQuestions[i].Answer;
        oldquestionstoarchive.UpdateDate = oldQuestions[i].UpdateDate;
        oldquestionstoarchive.IpAddress = oldQuestions[i].IpAddress;
        oldquestionstoarchive.SecurityQuestion = oldQuestions[i].SecurityQuestion;
        oldquestionstoarchive.User = oldQuestions[i].User;
        oldquestionstoarchivelist.Add(oldquestionstoarchive);
    }
    user.SecurityQuestionAnswersArchives = oldquestionstoarchivelist;
    //await Store.UpdateAsync(user);
    _dbContext.ArchiveSecurityQuestionAnswers.AddRange(oldquestionstoarchivelist);
    _dbContext.SecurityQuestionAnswers.RemoveRange(oldQuestions);
    await _dbContext.SaveChangesAsync();
    oldquestionstoarchivelist.Clear();
}
UPDATE 1
The loop looks fine; it iterates three times (0, 1, 2), which is expected. The first issue was with the AddRange call, to which I was passing a List while it takes an IEnumerable input. I rectified it using the following code (although List<T> already implements IEnumerable<T>, so this only changes the declared type):
IEnumerable<SecurityQuestionAnswersArchives> finalArchiveses = oldquestionstoarchivelist;
_dbContext.ArchiveSecurityQuestionAnswers.AddRange(finalArchiveses);
The other issue is the duplicate data, and I am unable to figure out where it comes from. Please help me find it.
Your help is much appreciated!
Got it! Just sharing in case anybody has the same issue.
The problem was initialization in the wrong place. I moved
var oldquestionstoarchive = new SecurityQuestionAnswersArchives();
inside the for loop, so the variable now holds a fresh instance on each iteration (previously, one shared instance was added to the list three times, so every list entry pointed to the same object).
var oldquestionstoarchivelist = new List<SecurityQuestionAnswersArchives>();
for (int i = 0; i < 3; i++)
{
    var oldquestionstoarchive = new SecurityQuestionAnswersArchives();
    oldquestionstoarchive.SecurityQuestionId = oldQuestions[i].SecurityQuestionId;
    oldquestionstoarchive.Answer = oldQuestions[i].Answer;
    oldquestionstoarchive.UpdateDate = oldQuestions[i].UpdateDate;
    oldquestionstoarchive.IpAddress = oldQuestions[i].IpAddress;
    oldquestionstoarchive.SecurityQuestion = oldQuestions[i].SecurityQuestion;
    oldquestionstoarchive.User = oldQuestions[i].User;
    oldquestionstoarchivelist.Add(oldquestionstoarchive);
}
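For reference, the same copy can also be written as a LINQ projection, which creates a new instance per source row by construction, so the shared-instance bug cannot occur. A minimal sketch, assuming using System.Linq; and the property names from the question (Id is omitted, as in the fixed version above):
var oldquestionstoarchivelist = oldQuestions
    .Select(q => new SecurityQuestionAnswersArchives
    {
        // One new archive object per source row
        SecurityQuestionId = q.SecurityQuestionId,
        Answer = q.Answer,
        UpdateDate = q.UpdateDate,
        IpAddress = q.IpAddress,
        SecurityQuestion = q.SecurityQuestion,
        User = q.User
    })
    .ToList();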

Dynamic Field Name in dynamic type in ASP.NET MVC

I am using Entity Framework and my table structure is as below:
Key, dec2000, dec2001, dec2002, ... dec2020
So, while updating in a loop, I have to specify each column name explicitly, like:
for (int j = 0; j < intEndYear - intStartYear; j++)
{
    // Find value from FormCollection
    // Assign it to object
    OBJECT.DEC2000 = 1;
}
So one way is to check the year in a condition: if the year is 2000, use the dec2000 field, and so on.
I tried to use the dynamic type so I can specify the field name dynamically, like
dynamic objLevel = objDetailSum.GetSingle(parameters);
and while updating I am trying to do
// trying to build the field name
string stryear = "DEC" + year.ToString();
objLevel.stryear = 1; // should read objLevel.dec2000 = 1;
I know this will not work, as I don't have a stryear column in my table.
Question: Is there another good way to handle this situation, so I don't have to check each and every column and can use dynamic instead?
Appreciate any direction.
You could use reflection to look the property up by name:
// assuming "myInstance" is an instance of the EF entity class:
string strYear = "DEC" + year.ToString();
var pi = myInstance.GetType().GetProperty(strYear);
if (pi != null)
{
    pi.SetValue(myInstance, 1);
}
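A minimal sketch of how this could look inside the update loop from the question (objLevel, intStartYear, and intEndYear are taken from the question; the entity is assumed to expose one property per DEC<year> column, and using System.Reflection; is needed for BindingFlags):
for (int year = intStartYear; year < intEndYear; year++)
{
    // IgnoreCase in case the generated properties are named dec2000 rather than DEC2000
    var pi = objLevel.GetType().GetProperty("DEC" + year,
        BindingFlags.Public | BindingFlags.Instance | BindingFlags.IgnoreCase);
    if (pi != null)
    {
        // Convert so the boxed value matches the property type;
        // SetValue does not coerce an int into, say, a decimal column.
        pi.SetValue(objLevel, Convert.ChangeType(1, pi.PropertyType));
    }
}
Since snapshot-based change tracking compares values rather than watching assignments, EF picks these reflection-based updates up in SaveChanges() like any normal property set.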

UnitTesting with Moles for tightly coupled 3rd party DLL

I'm new to Pex and Moles and am wondering how to unit test something like the code below.
I just need a UniDynArray to test. But creating a UniDynArray depends on a UniSession, and a UniSession depends on UniObjects.OpenConnection. When I run this code I get an error from the third-party DLL saying the session isn't open.
Please help.
[TestMethod, HostType("Moles")]
public void Constructor2WithMoles()
{
    using (MolesContext.Create())
    {
        // Should I make the third-party session like this?
        MUniSession msession = new MUniSession();
        // What actually happens in the code is that UniObjects.OpenSession returns the session:
        //     UniSession session = UniObjects.OpenSession(a, b, c, d);
        // How should I do this?
        // ???? MUniObjects.OpenSessionStringStringStringString = (a, b, c, d) => msession;
        MUniDynArray mdata = new MUniDynArray();
        // Note: each of the assignments below replaces the previous detour for
        // Insert(int, int, string); only the last one is active when the mole is invoked.
        mdata.InsertInt32Int32String = (column, index, strValue) =>
        {
            column = 1;
            index = 1;
            strValue = "Personal Auto";
        };
        mdata.InsertInt32Int32String = (column, index, strValue) =>
        {
            column = 2;
            index = 1;
            strValue = "1.1";
        };
        mdata.InsertInt32Int32String = (column, index, strValue) =>
        {
            column = 3;
            index = 1;
            strValue = "05/05/2005";
        };
        mdata.InsertInt32Int32String = (column, index, strValue) =>
        {
            column = 4;
            index = 1;
            strValue = "Some Description";
        };
        mdata.InsertInt32Int32String = (column, index, strValue) =>
        {
            column = 5;
            index = 1;
            strValue = "20";
        };
        mdata.InsertInt32Int32String = (column, index, strValue) =>
        {
            column = 6;
            index = 1;
            strValue = "1";
        };
        History target = new History(mdata, 1);
        Assert.AreEqual<string>("Some Description", target.Description);
    }
    // TODO: create Mole asserts
}
First, I strongly recommend waiting for the Fakes stub and shim types in .NET 4.5 and Visual Studio 2012 (releasing 12 Sep 2012). Fakes is the productized version of Moles. To start using Fakes, download the free Ultimate version of the VS2012 Release Candidate.
To answer your question, you should isolate your third-party dependency using the Dependency Injection design pattern. Many users of shim/mole types overlook the Dependency Injection pattern; use shim/mole types only when no other options remain. I assume the third-party library exposes interfaces. Fakes/Moles automatically converts interfaces to stub types that can be used for tests. You'll need to create concrete stubs from the interfaces for the production code to use; these stubs are simply wrappers for the targeted third-party library types. Look up any article on Dependency Injection for details on how to implement the pattern; it is quick and easy, especially with a refactoring tool such as ReSharper or Refactor! Pro.
Stub type method calls are detoured using the same lambda/delegate syntax as shim/mole types. Stub types are good for generating stubs unique to a single test or a small number of tests; otherwise, creating a concrete test stub for use in several tests is the best option.
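To make the pattern concrete, here is a minimal sketch of that dependency-injection seam, with hypothetical names (IHistoryRecord, UniDynArrayRecord, and FakeHistoryRecord are illustrations, and the UniDynArray.Extract accessor is an assumption, not something given in the question):
// Hypothetical seam: History reads its fields through this interface
// instead of taking a UniDynArray directly.
public interface IHistoryRecord
{
    string GetField(int column, int index);
}

// Production adapter wrapping the third-party type.
public class UniDynArrayRecord : IHistoryRecord
{
    private readonly UniDynArray _data;

    public UniDynArrayRecord(UniDynArray data)
    {
        _data = data;
    }

    public string GetField(int column, int index)
    {
        // Assumed accessor on the third-party type; adjust to the real UniDynArray API.
        return _data.Extract(column, index).ToString();
    }
}

// Test double: no UniSession, no open connection, no Moles required.
public class FakeHistoryRecord : IHistoryRecord
{
    private readonly Dictionary<int, string> _fields = new Dictionary<int, string>
    {
        { 1, "Personal Auto" },
        { 2, "1.1" },
        { 3, "05/05/2005" },
        { 4, "Some Description" },
        { 5, "20" },
        { 6, "1" }
    };

    public string GetField(int column, int index)
    {
        return _fields[column];
    }
}
History would then take an IHistoryRecord (or a factory producing one) in its constructor, so the test can build a FakeHistoryRecord and assert on target.Description without ever opening a session (add using System.Collections.Generic; for the Dictionary).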

Devart Oracle Entity Framework 4.1 performance

I want to know why code fragment 1 is faster than code fragment 2 when using POCOs with Devart dotConnect for Oracle.
I tried it with 100,000 records and code 1 is way faster than code 2. Why? I thought SaveChanges would clear the buffer, making it faster, as there is only one connection. Am I wrong?
Code 1:
for (var i = 0; i < 100000; i++)
{
    using (var ctx = new MyDbContext())
    {
        MyObj obj = new MyObj();
        obj.Id = i;
        obj.Name = "Foo " + i;
        ctx.MyObjects.Add(obj);
        ctx.SaveChanges();
    }
}
Code 2:
using (var ctx = new MyDbContext())
{
    for (var i = 0; i < 100000; i++)
    {
        MyObj obj = new MyObj();
        obj.Id = i;
        obj.Name = "Foo " + i;
        ctx.MyObjects.Add(obj);
        ctx.SaveChanges();
    }
}
The first code snippet works faster because the same connection is taken from the pool every time, so there are no performance losses from re-opening it.
In the second case, 100,000 objects are gradually added to the context. Slow snapshot-based change tracking is used (when there are no dynamic proxies), so each SaveChanges() call has to detect whether any of the cached objects have changed; more and more time is spent on each subsequent iteration.
We recommend that you try the following approach. It should perform better than either of the ones above:
using (var ctx = new MyDbContext())
{
    for (var i = 0; i < 100000; i++)
    {
        MyObj obj = new MyObj();
        obj.Id = i;
        obj.Name = "Foo " + i;
        ctx.MyObjects.Add(obj);
    }
    ctx.SaveChanges();
}
EDIT
If you execute a large number of operations within one SaveChanges(), it is also useful to configure the Entity Framework behaviour of the Devart dotConnect for Oracle provider:
// Turn on the Batch Updates mode:
var config = OracleEntityProviderConfig.Instance;
config.DmlOptions.BatchUpdates.Enabled = true;

// If necessary, enable the mode of re-using parameters with the same values:
config.DmlOptions.ReuseParameters = true;

// If the object has a lot of nullable properties and a significant part of them
// are not set (i.e., are null), omitting the explicit insert of NULL values
// greatly decreases the size of the generated SQL:
config.DmlOptions.InsertNullBehaviour = InsertNullBehaviour.Omit;
Only some options are mentioned here. The full list of them is available in our article:
http://www.devart.com/blogs/dotconnect/index.php/new-features-of-entity-framework-support-in-dotconnect-providers.html
Am I wrong to assume that when SaveChanges() is called, all the objects in cache are stored to DB and the cache is cleared, so each loop is independent?
SaveChanges() sends and commits all changes to the database, but change tracking continues for all entities attached to the context. The next SaveChanges(), if snapshot-based change tracking is used, will again start the long process of checking the value of each property of each attached object to see whether it changed.
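As a side note, with the plain Entity Framework DbContext API (not Devart-specific), a common mitigation for exactly this re-checking cost is to switch off automatic change detection while adding. A minimal sketch, reusing MyDbContext and MyObj from the question:
using (var ctx = new MyDbContext())
{
    // Skip the automatic DetectChanges() call that otherwise runs on every Add().
    // This is safe for pure inserts, because Add() marks entities as Added directly.
    ctx.Configuration.AutoDetectChangesEnabled = false;
    for (var i = 0; i < 100000; i++)
    {
        ctx.MyObjects.Add(new MyObj { Id = i, Name = "Foo " + i });
    }
    ctx.SaveChanges();
}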