Another MS CRM question from me, I'm afraid. I've got the following code being executed on the update of a contact record but it gives me an error saying the job was cancelled because it includes an infinite loop. Can anyone tell me why this is happening, please?
// <copyright file="PostContactUpdate.cs" company="">
// Copyright (c) 2013 All Rights Reserved
// </copyright>
// <author></author>
// <date>8/7/2013 2:04:26 PM</date>
// <summary>Implements the PostContactUpdate Plugin.</summary>
// <auto-generated>
// This code was generated by a tool.
// Runtime Version:4.0.30319.1
// </auto-generated>
namespace Plugins3Test
{
using System;
using System.ServiceModel;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;
/// <summary>
/// PostContactUpdate Plugin.
/// Fires when the following attributes are updated:
/// All Attributes
/// </summary>
public class PostContactUpdate: Plugin
{
/// <summary>
/// Initializes a new instance of the <see cref="PostContactUpdate"/> class.
/// </summary>
public PostContactUpdate()
: base(typeof(PostContactUpdate))
{
base.RegisteredEvents.Add(new Tuple<int, string, string, Action<LocalPluginContext>>(40, "Update", "contact", new Action<LocalPluginContext>(ExecutePostContactUpdate)));
// Note : you can register for more events here if this plugin is not specific to an individual entity and message combination.
// You may also need to update your RegisterFile.crmregister plug-in registration file to reflect any change.
}
/// <summary>
/// Executes the plug-in.
/// </summary>
/// <param name="localContext">The <see cref="LocalPluginContext"/> which contains the
/// <see cref="IPluginExecutionContext"/>,
/// <see cref="IOrganizationService"/>
/// and <see cref="ITracingService"/>
/// </param>
/// <remarks>
/// For improved performance, Microsoft Dynamics CRM caches plug-in instances.
/// The plug-in's Execute method should be written to be stateless as the constructor
/// is not called for every invocation of the plug-in. Also, multiple system threads
/// could execute the plug-in at the same time. All per invocation state information
/// is stored in the context. This means that you should not use global variables in plug-ins.
/// </remarks>
protected void ExecutePostContactUpdate(LocalPluginContext localContext)
{
if (localContext == null)
{
throw new ArgumentNullException("localContext");
}
// TODO: Implement your custom Plug-in business logic.
// Obtain the execution context from the service provider.
IPluginExecutionContext context = localContext.PluginExecutionContext;
IOrganizationService service = localContext.OrganizationService;
IServiceProvider serviceProvider = localContext.ServiceProvider;
ITracingService tracingService = localContext.TracingService;
// Obtain the target entity from the input parameters.
//Entity contextEntity = (Entity)context.InputParameters["Target"];
Entity targetEntity = null;
targetEntity = (Entity)context.InputParameters["Target"];
Guid cid = targetEntity.Id;
ColumnSet cols = new ColumnSet("jobtitle");
Entity contact = service.Retrieve("contact", cid, cols);
contact.Attributes["jobtitle"] = "Sometitle";
service.Update(contact);
}
}
}
It's happening because your plugin is executed when a contact is updated, and the last line of your code updates the contact again, which causes the plugin to be called again...
Then you have your infinite loop.
You can prevent the loop using the IExecutionContext.Depth property
http://msdn.microsoft.com/en-us/library/microsoft.xrm.sdk.iexecutioncontext.depth.aspx
However, if you explain your requirement, I think it's possible to find a solution.
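A minimal sketch of that guard, assuming the generated plugin template from the question (context is the IPluginExecutionContext obtained from localContext):
// Depth is 1 for the original Update request; a greater value means this
// execution was triggered from within another plugin (including this one),
// so bail out instead of updating the contact again.
if (context.Depth > 1)
{
    return;
}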
At first, checking IExecutionContext.Depth <= 1 seems like a great idea, but it can bite you if a different plugin also updates the contact. You should use the SharedVariables of the plugin context instead.
Something like this should work:
Add this declaration to the plugin class as a class-level field:
public static readonly Guid HasRunKey = new Guid("{6339dc20-01ce-4f2f-b4a1-0a1285b65bff}");
And add this as the first step of your plugin:
// SharedVariables is keyed by string, so use the Guid's string form,
// and Add requires both a key and a value.
if (context.SharedVariables.Contains(HasRunKey.ToString()))
{
    return;
}
else
{
    context.SharedVariables.Add(HasRunKey.ToString(), true);
    // Proceed with plugin execution
}
I went through a lot of trial and error. I don't know why the plugin context does not work, but the parent context does. This (workaround?) works :)
if (this.Context.ParentContext != null && this.Context.ParentContext.ParentContext != null)
{
var assemblyName = Assembly.GetExecutingAssembly().GetName().Name; // requires using System.Reflection;
if (!this.Context.ParentContext.ParentContext.SharedVariables.Contains(assemblyName))
{
this.Context.ParentContext.ParentContext.SharedVariables.Add(assemblyName, true.ToString() );
}
else
{
// isRecursive = true;
return;
}
}
Your plugin is updating the "jobtitle" field. I'm not sure whether this plugin is triggered by all contact updates, or whether you have set some FilteringAttributes for it in the RegisterFile.crmregister plugin definition. By excluding the "jobtitle" field from the attributes that trigger this plugin, you can solve your issue.
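If you would rather guard in code than change the registration, a rough sketch placed near the top of ExecutePostContactUpdate could look like this (the assumption here is that an update whose only attribute is "jobtitle" came from this plugin itself):
Entity targetEntity = (Entity)context.InputParameters["Target"];
// If "jobtitle" is the only attribute being written, treat it as this
// plugin's own follow-up update and skip the logic that would trigger the loop.
if (targetEntity.Attributes.Count == 1 && targetEntity.Contains("jobtitle"))
{
    return;
}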
I need to create files for 45 separate locations (example: Boston, London, etc.), and these file names have to be based on the date. Can I also provide a maximum file size at which to roll the files, and a maximum number of rolled files to keep?
Basically a file name must look like: Info_Boston_(2019.02.25).txt
So far I have come up with the below config to roll by date, but I couldn't limit the file size to 1MB. The file grows beyond 1MB and a new rolling file is not created. Please assist.
<appender name="MyAppenderInfo" type="log4net.Appender.RollingFileAppender">
<param name="File" value="C:\\ProgramData\\Service\\Org\\Info"/>
<param name="RollingStyle" value="Date"/>
<param name="DatePattern" value="_(yyyy.MM.dd).\tx\t"/>
<param name="StaticLogFileName" value="false"/>
<maxSizeRollBackups value="10" />
<maximumFileSize value="1MB" />
<appendToFile value="true" />
<lockingModel type="log4net.Appender.FileAppender+MinimalLock" />
<layout type="log4net.Layout.PatternLayout">
<conversionPattern value="%date %message%n" />
</layout>
<filter type="log4net.Filter.LevelRangeFilter">
<levelMin value="DEBUG" />
<levelMax value="INFO" />
</filter>
</appender>
To address your specific post: I would not do this with a config-based approach, as I think it would get rather cumbersome to manage. A more programmatic approach would be to generate the logging instances dynamically.
EDIT: I took down the original to post this reworked example based on this SO post log4net: different logs on different file appenders at runtime
EDIT-2: I had to rework this again, as I realized I had omitted some required parts and had some things wrong after the rework. This is tested and working. However, a few things to note: you will need to provide the using statements on the controller for the logging class you make. Next, you will need to DI your logging directories in as I have done, or come up with another method of providing the list of log file outputs (a sketch of the options class being bound is shown just below).
This will allow you to very cleanly dynamically generate as many logging instances as you need to, to as many independent locations as you would like. I pulled this example from a project I did, and modified it a bit to fit your needs. Let me know if you have questions.
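For reference, a minimal sketch of that options class; LoggingConfig and its two directory properties are assumptions that simply mirror the names read by LogFactory below, and the configuration section name is hypothetical:
// Hypothetical options class bound from configuration; the property names
// match what LogFactory reads in GetLogger.
public class LoggingConfig
{
    public string CoreLoggingDirectory { get; set; }
    public string ImageProcessorLoggingDirectory { get; set; }
}

// In Startup.ConfigureServices, bind it from configuration, e.g.:
// services.Configure<LoggingConfig>(Configuration.GetSection("LoggingConfig"));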
Create a DynamicLogger class which inherits from the base Logger in the hierarchy:
using log4net;
using log4net.Repository.Hierarchy;
public sealed class DynamicLogger : Logger
{
private const string REPOSITORY_NAME = "somename";
internal DynamicLogger(string name) : base(name)
{
try
{
// try and find an existing repository
base.Hierarchy = (log4net.Repository.Hierarchy.Hierarchy)LogManager.GetRepository(REPOSITORY_NAME);
} // try
catch
{
// it doesn't exist, make it.
base.Hierarchy = (log4net.Repository.Hierarchy.Hierarchy)LogManager.CreateRepository(REPOSITORY_NAME);
} // catch
} // ctor(string)
} // DynamicLogger
then, build out a class to manage the logging instances, and build the new loggers:
using log4net;
using log4net.Appender;
using log4net.Config;
using log4net.Core;
using log4net.Filter;
using log4net.Layout;
using log4net.Repository;
using Microsoft.Extensions.Options;
using System.Collections.Generic;
using System.Linq;
public class LogFactory
{
private const string REPOSITORY_NAME = "somename"; // must match the repository name used by DynamicLogger
private static List<ILog> _Loggers = new List<ILog>();
private static LoggingConfig _Settings;
private static ILoggerRepository _Repository;
public LogFactory(IOptions<LoggingConfig> configuration)
{
_Settings = configuration.Value;
ConfigureRepository(REPOSITORY_NAME);
} // ctor(IOptions<LoggingConfig>)
/// <summary>
/// Configures the primary logging repository.
/// </summary>
/// <param name="repositoryName">The name of the repository.</param>
private void ConfigureRepository(string repositoryName)
{
if(_Repository == null)
{
try
{
_Repository = LogManager.CreateRepository(repositoryName);
}
catch
{
// repository already exists.
_Repository = LogManager.GetRepository(repositoryName);
} // catch
} // if
} // ConfigureRepository(string)
/// <summary>
/// Gets a named logging instance, if it exists, and creates it if it doesn't.
/// </summary>
/// <param name="name"></param>
/// <returns></returns>
public ILog GetLogger(string name)
{
string filePath = string.Empty;
switch (name)
{
case "core":
filePath = _Settings.CoreLoggingDirectory;
break;
case "image":
filePath = _Settings.ImageProcessorLoggingDirectory;
break;
} // switch
if (_Loggers.SingleOrDefault(a => a.Logger.Name == name) == null)
{
BuildLogger(name, filePath);
} // if
return _Loggers.SingleOrDefault(a => a.Logger.Name == name);
} // GetLogger(string)
/// <summary>
/// Dynamically build a new logging instance.
/// </summary>
/// <param name="name">The name of the logger (Not file name)</param>
/// <param name="filePath">The file path you want to log to.</param>
/// <returns></returns>
private ILog BuildLogger(string name, string filePath)
{
// Create a new filter to include all logging levels, debug, info, error, etc.
var filter = new LevelMatchFilter();
filter.LevelToMatch = Level.All;
filter.ActivateOptions();
// Create a new pattern layout to determine the format of the log entry.
var pattern = new PatternLayout("%d %-5p %c %m%n");
pattern.ActivateOptions();
// Dynamic logger inherits from the hierarchy logger object, allowing us to create dynamically generated logging instances.
var logger = new DynamicLogger(name);
logger.Level = Level.All;
// Create a new rolling file appender
var rollingAppender = new RollingFileAppender();
// ensures it will not create a new file each time it is called.
rollingAppender.AppendToFile = true;
rollingAppender.Name = name;
rollingAppender.File = filePath;
rollingAppender.Layout = pattern;
rollingAppender.AddFilter(filter);
// allows us to dynamically generate the file name, ie C:\temp\log_{date}.log
rollingAppender.StaticLogFileName = false;
// ensures that the file extension is not lost in the renaming for the rolling file
rollingAppender.PreserveLogFileNameExtension = true;
rollingAppender.DatePattern = "yyyy-MM-dd";
rollingAppender.RollingStyle = RollingFileAppender.RollingMode.Date;
// must be called on all attached objects before the logger can use it.
rollingAppender.ActivateOptions();
logger.AddAppender(rollingAppender);
// Ensures the new logger does not inherit the appenders of other loggers.
logger.Additivity = false;
// Sets the logger's effective level, determining which log requests it will handle and log.
logger.Level = Level.Info;
// The very last thing that we need to do is tell the repository it is configured, so it can bind the values.
_Repository.Configured = true;
// bind the values.
BasicConfigurator.Configure(_Repository, rollingAppender);
LogImpl newLog = new LogImpl(logger);
_Loggers.Add(newLog);
return newLog;
} // BuildLogger(string, string)
} // LogFactory
Then, in your dependency injection setup, you can register your log factory. You can do that with something like this:
services.AddSingleton<LogFactory>();
Then in your controller, or any constructor really, you can just do something like this:
private LogFactory _LogFactory;
public HomeController(LogFactory logFactory){
_LogFactory = logFactory;
}
public async Task<IActionResult> Index()
{
ILog logger1 = _LogFactory.GetLogger("core");
ILog logger2 = _LogFactory.GetLogger("image");
logger1.Info("SomethingHappened on logger 1");
logger2.Info("SomethingHappened on logger 2");
return View();
}
This example will output:
2019-03-07 10:41:21,338 INFO core SomethingHappened on logger 1
in its own file called Core_2019-03-07.log
and also:
2019-03-07 11:06:29,155 INFO image SomethingHappened on logger 2
in its own file called Image_2019-03-07.log
Hope that makes more sense!
I am using UI Automation for GUI testing.
My window title contains the application name followed by a filename.
So, I want to specify a Contains match in my Name PropertyCondition.
I checked the overload, but it only relates to ignoring the case of the Name value.
Can anyone let me know how to specify Contains in my Name PropertyCondition?
I tried Max Young's solution but could not wait for it to complete. Probably my visual tree was too large, not sure. I decided that it's my app and I should use my knowledge of the concrete element type I am searching for; in my case it was a WPF TextBlock, so I made this:
public AutomationElement FindElementBySubstring(AutomationElement element, ControlType controlType, string searchTerm)
{
AutomationElementCollection textDescendants = element.FindAll(
TreeScope.Descendants,
new PropertyCondition(AutomationElement.ControlTypeProperty, controlType));
foreach (AutomationElement el in textDescendants)
{
if (el.Current.Name.Contains(searchTerm))
return el;
}
return null;
}
sample usage:
AutomationElement textElement = FindElementBySubstring(parentElement, ControlType.Text, "whatever");
and it worked fast.
As far as I know there is no way to do a Contains match while using the Name property, but you could do something like this.
/// <summary>
/// Returns the first automation element that is a child of the element you passed in and contains the string you passed in.
/// </summary>
public AutomationElement GetElementByName(AutomationElement aeElement, string sSearchTerm)
{
// Walk every direct child via the raw view, checking each name for the search term.
AutomationElement aeChild = TreeWalker.RawViewWalker.GetFirstChild(aeElement);
while (aeChild != null)
{
    if (aeChild.Current.Name.Contains(sSearchTerm))
    {
        return aeChild;
    }
    aeChild = TreeWalker.RawViewWalker.GetNextSibling(aeChild);
}
return null;
}
Then you would do this to get the desktop, and pass the desktop with your string into the above method:
/// <summary>
/// Finds the automation element for the desktop.
/// </summary>
/// <returns>Returns the automation element for the desktop.</returns>
public AutomationElement GetDesktop()
{
AutomationElement aeDesktop = AutomationElement.RootElement;
return aeDesktop;
}
Complete usage would look something like this:
AutomationElement oAutomationElement = GetElementByName(GetDesktop(), "Part of my apps name");
Why can't we publish events without any payload?
_eventAggregator.GetEvent<SelectFolderEvent>().Publish(new SelectFolderEventCriteria() { });
Now, I don't need any payload to be passed here. But the EventAggregator implementation mandates that I have an empty class to do that.
Event:
public class SelectFolderEvent : CompositePresentationEvent<SelectFolderEventCriteria>
{
}
Payload:
public class SelectFolderEventCriteria
{
}
Why does Prism not provide a way to use just the event and publish it like this:
_eventAggregator.GetEvent<SelectFolderEvent>().Publish();
Is it by design and I don't understand it?
Please explain. Thanks!
Good question. I don't see a reason for not publishing an event without a payload. There are cases where the fact that an event has been raised is all the information you need and want to handle.
There are two options. As Prism is open source, you could take the Prism source and extract a CompositePresentationEvent that doesn't take a payload.
I wouldn't do that, though; I would treat Prism as a third-party library and leave it as it is. It is good practice to write a facade for a third-party library to fit it into your project, in this case for CompositePresentationEvent. This could look something like this:
public class EmptyPresentationEvent : EventBase
{
/// <summary>
/// Event which facade is for
/// </summary>
private readonly CompositePresentationEvent<object> _innerEvent;
/// <summary>
/// Dictionary which maps parameterless actions to wrapped
/// actions which take the ignored parameter
/// </summary>
private readonly Dictionary<Action, Action<object>> _subscriberActions;
public EmptyPresentationEvent()
{
_innerEvent = new CompositePresentationEvent<object>();
_subscriberActions = new Dictionary<Action, Action<object>>();
}
public void Publish()
{
_innerEvent.Publish(null);
}
public void Subscribe(Action action)
{
Action<object> wrappedAction = o => action();
_subscriberActions.Add(action, wrappedAction);
_innerEvent.Subscribe(wrappedAction);
}
public void Unsubscribe(Action action)
{
if (!_subscriberActions.ContainsKey(action)) return;
var wrappedActionToUnsubscribe = _subscriberActions[action];
_innerEvent.Unsubscribe(wrappedActionToUnsubscribe);
_subscriberActions.Remove(action);
}
}
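A usage sketch, assuming a concrete no-payload event is derived from the facade above (the event and handler names are just placeholders):
// Hypothetical event that carries no payload.
public class SelectFolderEvent : EmptyPresentationEvent { }

// Publisher side:
_eventAggregator.GetEvent<SelectFolderEvent>().Publish();

// Subscriber side:
_eventAggregator.GetEvent<SelectFolderEvent>().Subscribe(OnFolderSelected);

private void OnFolderSelected()
{
    // React to the notification; there is no payload to inspect.
}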
If anything is unclear, please ask.
Just to update the situation since this question was asked and answered: as of Prism 6.2, empty payloads are now supported in Prism PubSubEvents.
If you're using an older version, this blog shows how to create an "Empty" class that clearly indicates the intent of the payload: https://blog.davidpadbury.com/2010/01/01/empty-type-parameters/
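A sketch of the newer shape, assuming the Prism.Events namespace from Prism 6.2+, where the non-generic PubSubEvent exposes a parameterless Publish and an Action-based Subscribe:
using Prism.Events;

public class SelectFolderEvent : PubSubEvent { }

// Subscriber side:
_eventAggregator.GetEvent<SelectFolderEvent>().Subscribe(() => { /* handle the event */ });

// Publisher side:
_eventAggregator.GetEvent<SelectFolderEvent>().Publish();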
I am looking to design some logic inside my Create plugin for the 'account' entity.
What it does is basically check account names and identify account names which are duplicates on creation.
So if there is an account name, Barclays for example, and I try to create it again, I want to alert the user with an error message that it has been created before and prevent the record from being added.
public void Execute(IServiceProvider serviceProvider)
{
var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(Microsoft.Xrm.Sdk.IPluginExecutionContext));
if (context.InputParameters.Contains("Target") &&
context.InputParameters["Target"] is Entity)
{
// Obtain the target entity from the input parameters.
Entity entity = (Entity)context.InputParameters["Target"];
if (entity.LogicalName == "account")
{
bool x = true;
if (entity.Attributes.Contains("Name") != recordNamesinCRM)
{
}
else
{
throw new InvalidPluginExecutionException("You Cannot Have Duplicate Country Codes!.");
}
}
}
}
In the code above I am just using "recordNamesinCRM" as an example, but I'm sure there is a built-in function or a way of comparing, on create, a new name with the rest in the system, or a way of counting recurring instances.
You can use the RetrieveDuplicatesRequest as per this example here:
/// <summary>
/// Checks for duplicate accounts
/// </summary>
/// <param name="account"></param>
/// <returns>First duplicate account id, if any duplicates found, and Guid.Empty if not</returns>
public Guid DuplicateExists(Account account)
{
RetrieveDuplicatesRequest request = new RetrieveDuplicatesRequest();
request.BusinessEntity = account;
request.MatchingEntityName = Account.EntityLogicalName;
request.PagingInfo = new PagingInfo();
request.PagingInfo.PageNumber = 1;
request.PagingInfo.Count = 1;
RetrieveDuplicatesResponse response = (RetrieveDuplicatesResponse)ServiceProxy.Execute(request);
return response.DuplicateCollection.Entities.Count > 0 ? response.DuplicateCollection.Entities[0].Id : Guid.Empty;
}
See http://crm-edinburgh.com/2011/08/crm-sdk-using-detect-duplicates-settings-in-code/ for an example.
Are you aware of the built-in duplicate detection?
See following links:
http://blogs.msdn.com/b/crm/archive/2008/01/17/crm-4-system-wide-duplicate-detection.aspx
http://msdn.microsoft.com/en-us/library/dd393304.aspx
http://www.youtube.com/watch?v=X8vPcV6jyLg
Although the links describe the duplicate detection of Dynamics CRM 4, they are still valid for Dynamics CRM 2011.
Take a look at the article Run Duplicate Detection in the Dynamics CRM 2011 SDK.
You could either use the optional parameter SuppressDuplicateDetection or you could use the RetrieveDuplicatesRequest, although this will only work for existing records.
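For the create scenario, a minimal sketch of the SuppressDuplicateDetection optional parameter, assuming an IOrganizationService named service and at least one published duplicate detection rule on account:
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Messages;

var account = new Entity("account");
account["name"] = "Barclays";

var createRequest = new CreateRequest { Target = account };
// When this parameter is false, published duplicate detection rules run for
// this create and the request faults if a duplicate is found.
createRequest.Parameters["SuppressDuplicateDetection"] = false;

service.Execute(createRequest);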
Wondering if anybody out there has had any success using the JD Edwards XMLInterop functionality. I've been using it for a while (with a simple PInvoke, will post code later). I'm looking to see if there's a better and/or more robust way.
Thanks.
As promised, here is the code for integrating with JD Edwards using XML. It's a webservice, but could be used as you see fit.
namespace YourNameSpace
{
using System;
using System.Configuration;
using System.Globalization;
using System.Runtime.InteropServices;
using System.Text;
using System.Web.Services;
using System.Xml;
/// <summary>
/// This webservice allows you to submit JDE XML CallObject requests via a c# webservice
/// </summary>
[WebService(Namespace = "http://WebSite.com/")]
[WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
public class JdeBFService : System.Web.Services.WebService
{
private string _strServerName;
private UInt16 _intServerPort;
private Int16 _intServerTimeout;
public JdeBFService()
{
// Load JDE ServerName, Port, & Connection Timeout from the Web.config file.
_strServerName = ConfigurationManager.AppSettings["JdeServerName"];
_intServerPort = Convert.ToUInt16(ConfigurationManager.AppSettings["JdePort"], CultureInfo.InvariantCulture);
_intServerTimeout = Convert.ToInt16(ConfigurationManager.AppSettings["JdeTimeout"], CultureInfo.InvariantCulture);
}
/// <summary>
/// This webmethod allows you to submit an XML formatted jdeRequest document
/// that will call any Master Business Function referenced in the XML document
/// and return a response.
/// </summary>
/// <param name="Xml"> The jdeRequest XML document </param>
[WebMethod]
public XmlDocument JdeXmlRequest(XmlDocument xmlInput)
{
try
{
string outputXml = string.Empty;
outputXml = NativeMethods.JdeXmlRequest(xmlInput, _strServerName, _intServerPort, _intServerTimeout);
XmlDocument outputXmlDoc = new XmlDocument();
outputXmlDoc.LoadXml(outputXml);
return outputXmlDoc;
}
catch (Exception ex)
{
ErrorReporting.SendEmail(ex);
throw;
}
}
}
/// <summary>
/// This interop class uses pinvoke to call the JDE C++ dll. It only has one static function.
/// </summary>
/// <remarks>
/// This class calls the xmlinterop.dll which can be found in the B9/system/bin32 directory.
/// Copy the dll to the webservice project's /bin directory before running the project.
/// </remarks>
internal static class NativeMethods
{
[DllImport("xmlinterop.dll",
EntryPoint = "_jdeXMLRequest@20",
CharSet = CharSet.Auto,
ExactSpelling = false,
CallingConvention = CallingConvention.StdCall,
SetLastError = true)]
private static extern IntPtr jdeXMLRequest([MarshalAs(UnmanagedType.LPWStr)] StringBuilder server, UInt16 port, Int32 timeout, [MarshalAs(UnmanagedType.LPStr)] StringBuilder buf, Int32 length);
public static string JdeXmlRequest(XmlDocument xmlInput, string strServerName, UInt16 intPort, Int32 intTimeout)
{
StringBuilder sbServerName = new StringBuilder(strServerName);
StringBuilder sbXML = new StringBuilder();
XmlWriter xWriter = XmlWriter.Create(sbXML);
xmlInput.WriteTo(xWriter);
xWriter.Close();
string result = Marshal.PtrToStringAnsi(jdeXMLRequest(sbServerName, intPort, intTimeout, sbXML, sbXML.Length));
return result;
}
}
}
You have to send it messages like the following one:
<jdeRequest type='callmethod' user='USER' pwd='PWD' environment='ENV'>
<callMethod name='GetEffectiveAddress' app='JdeWebRequest' runOnError='no'>
<params>
<param name='mnAddressNumber'>10000</param>
</params>
</callMethod>
</jdeRequest>
To anyone trying to do this: there are some dependencies for xmlinterop.dll.
You'll find these files on the fat client here -> c:\E910\system\bin32
Copying them will create a 'thin client':
PSThread.dll
icudt32.dll
icui18n.dll
icuuc.dll
jdel.dll
jdeunicode.dll
libeay32.dll
msvcp71.dll
ssleay32.dll
ustdio.dll
xmlinterop.dll
I changed our JDE web service to use XML Interop after seeing this code, and we haven't had any stability problems since. Previously we were using the COM Connector, which exhibited regular communication failures (possibly a connection pooling issue?) and was a pain to install and configure correctly.
We did have issues when we attempted to use transactions, but if you're doing simple single business function calls this shouldn't be a problem.
Update: To elaborate on the transaction issues - if you're attempting to keep a transaction alive over multiple calls, AND the JDE application server is handling a modest number of concurrent calls, the xmlinterop calls start returning an 'XML response failed' message and the DB transaction is left open with no way to commit or rollback. It's possible tweaking the number of kernels might solve this, but personally, I'd always try to complete the transaction in a single call.