Not allowed to use reflection in asp.net mvc2 running on a secure server?

I'm trying to deploy a web site on a secure server, but I'm having problems with reflection.
I guess this has something to do with security, but I'm not quite sure.
The error occurs when calling Assembly.GetTypes():
ReflectionTypeLoadException: Unable to load one or more of the requested types. Retrieve the LoaderExceptions property for more information.
System.Reflection.Module._GetTypesInternal(StackCrawlMark& stackMark) +0
System.Reflection.Assembly.GetTypes() +105
If I just ignore these exceptions and continue to load from assemblies, nothing is loaded. It seems like I'm not allowed to do this for any assembly at all. When I run on a non-secure web server, everything works fine. I don't know much about IIS, MVC or .NET security policies, so any help in the right direction would be greatly appreciated.
Here's the code causing the exception:
private static IEnumerable<IModule> GetModules()
{
    foreach (var assembly in AppDomain.CurrentDomain.GetAssemblies())
    {
        foreach (var type in assembly.GetTypes()) // <--- This one throws
        {
            var moduleType = type.GetInterface(typeof(IModule).Name);
            if (moduleType != null)
            {
                IModule module = null;
                try
                {
                    module = (IModule)Activator.CreateInstance(type, null);
                }
                catch (ReflectionTypeLoadException ex)
                {
                    // GetInterface() matches any interface with the same
                    // name, so we just skip the ones that aren't ours.
                    logger.Warn("Could not load module", ex);
                }
                if (module != null)
                    yield return module;
            }
        }
    }
}
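For reference, here is a minimal sketch of a more tolerant scan, assuming the same logger field and a using directive for System.Linq; the GetTypesSafely helper name is just illustrative. It catches ReflectionTypeLoadException around GetTypes() and logs the LoaderExceptions, which is where the missing-dependency details show up:
private static IEnumerable<Type> GetTypesSafely(Assembly assembly)
{
    try
    {
        return assembly.GetTypes();
    }
    catch (ReflectionTypeLoadException ex)
    {
        // LoaderExceptions says exactly which dependencies could not be resolved.
        foreach (var loaderException in ex.LoaderExceptions)
        {
            logger.Warn("Could not load a type from " + assembly.FullName, loaderException);
        }
        // Types that did load are still usable; the failed entries are null.
        return ex.Types.Where(t => t != null);
    }
}
The outer loop in GetModules() would then iterate over GetTypesSafely(assembly) instead of calling assembly.GetTypes() directly.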

I found the error. It's not related to HTTP/HTTPS at all.
This post helped me on track http://forums.asp.net/t/1196710.aspx
Some assemblies in a dependent module didn't get copied to the output folder. My development machine and the test server have these in the GAC, so everything works there.
I added these components explicitly, and now everything works.
Thanks for your time


"ERROR: 57014: canceling statement due to user request" Npgsql

I am having this phantom problem in my application where one in every five requests on a specific page (in an ASP.NET MVC application) throws this error:
Npgsql.NpgsqlException: ERROR: 57014: canceling statement due to user request
at Npgsql.NpgsqlState.<ProcessBackendResponses>d__0.MoveNext()
at Npgsql.ForwardsOnlyDataReader.GetNextResponseObject(Boolean cleanup)
at Npgsql.ForwardsOnlyDataReader.GetNextRow(Boolean clearPending)
at Npgsql.ForwardsOnlyDataReader.Read()
at Npgsql.NpgsqlCommand.GetReader(CommandBehavior cb)
...
On the Npgsql GitHub page I found the following bug report: 615
It says there:
Regardless of what exactly is happening with Dapper, there's
definitely a race condition when cancelling commands. Part of this is
by design, because of PostgreSQL: cancel requests are totally
"asynchronous" (they're delivered via an unrelated socket, not as part
of the connection to be cancelled), and you can't restrict the
cancellation to take effect only on a specific command. In other
words, if you want to cancel command A, by the time your cancellation
is delivered command B may already be in progress and it will be
cancelled instead.
Although they have made "changes to hopefully make cancellations much safer" in Npgsql 3.0.2, my current code is incompatible with that version because of the migration described here.
My current workaround (stupid): I have commented out the code in Dapper that calls command.Cancel(), and the problem seems to be gone.
if (reader != null)
{
    if (!reader.IsClosed && command != null)
    {
        //command.Cancel();
    }
    reader.Dispose();
    reader = null;
}
Is there a better solution to the problem? And secondly, what am I losing with the current fix (except that I have to remember to reapply the change every time I update Dapper)?
Configuration:
.NET 4.5,
Npgsql 2.2.5,
PostgreSQL 9.3
I found out why my code wasn't disposing the reader, which resulted in command.Cancel() being called. This only happens with the QueryMultiple method when not every refcursor is read.
Changing the code from:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    if (client != null)
    {
        client.Address = multipleResults.Read<Address>().Single();
    }
    return client;
}
To:
using (var multipleResults = connection.QueryMultiple("schema.getuserbysocialsecurity", new { socialSecurityNumber }))
{
    var client = multipleResults.Read<Client>().SingleOrDefault();
    var address = multipleResults.Read<Address>().SingleOrDefault();
    if (client != null)
    {
        client.Address = address;
    }
    return client;
}
This fixed the issue and now the reader is properly disposed and command.Cancel() is not invoked.
Hope this helps anyone else!
UPDATE
The Npgsql docs for version 2.2 state:
Npgsql is able to ask the server to cancel commands in progress. To do
this, call the NpgsqlCommand’s Cancel method. Note that another thread
must handle the request as the main thread will be blocked waiting for
command to finish. Also, the main thread will raise an exception as a
result of user cancellation. (The error code is 57014.)
I have also posted an issue on the Dapper github page.
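For completeness, here is a minimal sketch of that pattern with plain ADO.NET and Npgsql 2.x (the connection string, query and five-second delay are placeholders, not from the question): a second thread issues the cancel request while the main thread is blocked inside the command, and the cancellation surfaces as the 57014 error the docs describe.
using (var conn = new NpgsqlConnection("Server=localhost;Database=test;User Id=app;Password=secret"))
{
    conn.Open();
    using (var cmd = new NpgsqlCommand("SELECT pg_sleep(60)", conn))
    {
        // Another thread must issue the cancel request, because this thread
        // will be blocked inside ExecuteNonQuery() until the command finishes.
        var canceller = new Thread(() =>
        {
            Thread.Sleep(TimeSpan.FromSeconds(5));
            cmd.Cancel();
        });
        canceller.Start();

        try
        {
            cmd.ExecuteNonQuery();
        }
        catch (NpgsqlException ex)
        {
            // Per the docs quoted above, the cancelled command raises
            // an exception on the main thread (error code 57014).
            Console.WriteLine("Command cancelled: " + ex.Message);
        }
    }
}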

Map SignalR Hub after using MEF to load plugin

I'm trying to have a SignalR hub as part of a plugin using MEF. But after calling ImportMany on a List<> object and then adding the catalog/container/ComposeParts part in the Application_Start() method of the Global.asax file, all I get is:
Uncaught TypeError: Cannot read property 'server' of undefined.
I've got no clue if the problem comes from my interface, the plugin, the global.asax file, or the javascript.
The interface:
public interface IPlugin
{
}
the plugin:
[Export(typeof(IPlugin))]
[HubName("testHub")]
public class TestHub : Hub, IPlugin
{
    public string Message()
    {
        return "Hello World!";
    }
}
in the Global.asax file:
[ImportMany(typeof(IPlugin))]
private IEnumerable<IPlugin> _plugins { get; set; }

protected void Application_Start()
{
    var catalog = new AggregateCatalog();
    catalog.Catalogs.Add(new DirectoryCatalog(@"./Plugins"));
    var container = new CompositionContainer(catalog);
    container.ComposeParts(this);

    RouteTable.Routes.MapHubs();

    //log4net
    log4net.Config.XmlConfigurator.Configure();

    AreaRegistration.RegisterAllAreas();
    WebApiConfig.Register(GlobalConfiguration.Configuration);
    FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
    RouteConfig.RegisterRoutes(RouteTable.Routes);
}
and finally the javascript:
$(document).ready(function () {
    $.connection.hub.url = 'http://127.0.0.1/signalr/';
    var proxy = $.connection.testHub;
    $.connection.hub.start({ transport: ['webSockets', 'serverSentEvents', 'longPolling'] })
        .done(function () {
            proxy.invoke('Message').done(function (res) {
                alert(res);
            });
        })
        .fail(function () { alert("Could not Connect!"); });
});
The only information I've found was this post, but I could not make it work. Everything works fine when I add the reference manually, but when I look at "signalr/hubs" after loading the plugin, there is no reference to my hub's method.
Thanks a lot for your help.
Your problem is that SignalR caches the generated "signalr/hubs" proxy script the first time it is requested. SignalR serves the cached script in response to every subsequent request to "signalr/hubs".
SignalR not only caches the script itself, but it also caches the collection of Hubs it finds at the start of the process.
You can work around the cached proxy script issue by simply not using the proxy script, but that still won't enable you to actually connect to Hubs defined in assemblies that are loaded after the process starts.
If you want to be able to connect to such Hubs, you will need to implement your own IHubDescriptorProvider that is aware of Hubs defined in plugins loaded at runtime.
You can register your provider with SignalR's DependencyResolver which can be passed into SignalR via the Resolver property of the HubConfiguration object you pass into MapSignalR.
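For anyone wanting to try the provider route, here is a minimal sketch, assuming SignalR 2.x (MapSignalR) as referenced above; PluginHubDescriptorProvider and the pluginHubTypes parameter are illustrative names, and pluginHubTypes would come from whatever MEF composed (for example, the types of the imported IPlugin instances):
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.AspNet.SignalR;
using Microsoft.AspNet.SignalR.Hubs;

public class PluginHubDescriptorProvider : IHubDescriptorProvider
{
    private readonly IHubDescriptorProvider _inner;
    private readonly List<HubDescriptor> _pluginHubs;

    public PluginHubDescriptorProvider(IHubDescriptorProvider inner, IEnumerable<Type> pluginHubTypes)
    {
        _inner = inner;
        // Build descriptors for hub types found in the plugin assemblies.
        _pluginHubs = pluginHubTypes
            .Where(t => typeof(IHub).IsAssignableFrom(t) && !t.IsAbstract)
            .Select(t => new HubDescriptor
            {
                HubType = t,
                Name = GetHubName(t),
                NameSpecified = true
            })
            .ToList();
    }

    public IList<HubDescriptor> GetHubs()
    {
        // Default hubs plus the ones discovered in plugins.
        return _inner.GetHubs().Concat(_pluginHubs).ToList();
    }

    public bool TryGetHub(string hubName, out HubDescriptor descriptor)
    {
        if (_inner.TryGetHub(hubName, out descriptor))
            return true;
        descriptor = _pluginHubs.FirstOrDefault(
            h => string.Equals(h.Name, hubName, StringComparison.OrdinalIgnoreCase));
        return descriptor != null;
    }

    private static string GetHubName(Type type)
    {
        // Honour [HubName("...")] if present, otherwise fall back to the type name.
        var attr = (HubNameAttribute)Attribute.GetCustomAttribute(type, typeof(HubNameAttribute));
        return attr != null ? attr.HubName : type.Name;
    }
}
Registration would then look roughly like this in an OWIN Startup class (again a sketch, not tested against your setup):
var config = new HubConfiguration();
var defaultProvider = config.Resolver.Resolve<IHubDescriptorProvider>();
config.Resolver.Register(typeof(IHubDescriptorProvider),
    () => new PluginHubDescriptorProvider(defaultProvider, pluginHubTypes));
app.MapSignalR(config);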
That said, it would probably be easier to restart the app pool/server process whenever a plugin is added to the "./Plugins" directory.
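If you go the restart route instead, one hedged way to automate it is to watch the plugin folder and recycle the app domain when a new DLL shows up, so SignalR rebuilds its hub cache on the next request; the _pluginWatcher field and WatchPluginFolder method are illustrative names, and the path matches the "./Plugins" folder from the question:
// Requires System.IO and System.Web; lives in Global.asax alongside the existing code.
private static FileSystemWatcher _pluginWatcher;

// Call this at the end of Application_Start().
private void WatchPluginFolder()
{
    var pluginPath = Server.MapPath("~/Plugins");
    _pluginWatcher = new FileSystemWatcher(pluginPath, "*.dll");
    _pluginWatcher.Created += (s, e) => HttpRuntime.UnloadAppDomain();
    _pluginWatcher.EnableRaisingEvents = true;
}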

Can a self-signed applet access the local file system?

Hi, I have created a self-signed applet, but I am not able to access the local file system. What do I have to do?
You need to wrap your I/O code inside a PrivilegedAction.
Generally, you need to sign your applet with your test certificate; the user will see a warning and will have to accept the certificate when the applet loads.
Then you need to wrap your code inside a PrivilegedAction. See this for some examples.
The code below is used to add a Bouncy Castle provider; you can use the same approach for accessing files. The AccessController Java API is used.
AccessController.doPrivileged(new PrivilegedAction() {
    public Object run() {
        try {
            // Here you can write the code for file access instead.
            Security.addProvider(new org.bouncycastle.jce.provider.BouncyCastleProvider());
        } catch (Exception e) {
            return "";
        }
        return "";
    }
});

Properly disposing resources used by SmtpClient

I have a C# service that runs continuously with user credentials (i.e. not as LocalSystem; I can't change this though I want to). For the most part the service seems to run OK, but every so often it bombs out and restarts for no apparent reason (the service manager is set to restart the service on crash).
I am doing substantial event logging, and I have a layered approach to exception handling that I believe makes at least some sort of sense:
Essentially I've got the top-level generic exception, null exception and startup exception handlers.
Then I've got various handlers at the "command level" (i.e. specific actions that the service runs).
Finally, I handle a few exceptions at the class level.
I have been looking at whether any resources aren't properly released, and I am starting to suspect my mailing code (send email). I noticed that I was not calling Dispose for the MailMessage object, and I have now rewritten the SendMail code as illustrated below.
The basic question is:
will this code properly release all resources used to send mails?
I don't see a way to dispose of the SmtpClient object?
(for the record: I am not using object initializer to make the sample easier to read)
private static void SendMail(string subject, string html)
{
    try
    {
        using (var m = new MailMessage())
        {
            m.From = new MailAddress("service@company.com");
            m.To.Add("user@company.com");
            m.Priority = MailPriority.Normal;
            m.IsBodyHtml = true;
            m.Subject = subject;
            m.Body = html;
            var smtp = new SmtpClient("mailhost");
            smtp.Send(m);
        }
    }
    catch (Exception ex)
    {
        throw new MyMailException("Mail error.", ex);
    }
}
I know this question is pre-.NET 4, but version 4 now supports a Dispose method that properly sends a QUIT to the SMTP server. See the MSDN reference and a newer Stack Overflow question.
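Along those lines, a minimal sketch of the .NET 4+ version, where SmtpClient implements IDisposable (the host name and addresses are the placeholders from the question):
private static void SendMail(string subject, string html)
{
    try
    {
        // Disposing the client sends QUIT to the SMTP server and releases the connection.
        using (var m = new MailMessage())
        using (var smtp = new SmtpClient("mailhost"))
        {
            m.From = new MailAddress("service@company.com");
            m.To.Add("user@company.com");
            m.IsBodyHtml = true;
            m.Subject = subject;
            m.Body = html;
            smtp.Send(m);
        }
    }
    catch (Exception ex)
    {
        throw new MyMailException("Mail error.", ex);
    }
}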
There are documented issues with the SmtpClient class. I recommend buying a third party control since they aren't too expensive. Chilkat makes a decent one.

How can I prevent unauthorized code from accessing my assembly in .NET 2.0?

In .NET 1.x, you could use the StrongNameIdentityPermissionAttribute on your assembly to ensure that only code signed by you could access your assembly. According to the MSDN documentation,
In the .NET Framework version 2.0 and later, demands for identity
permissions are ineffective if the calling assembly has full trust.
This means that any application with full trust can just bypass my security demands.
How can I prevent unauthorized code from accessing my assembly in .NET 2.0?
As per Eric's suggestion, I solved it by checking the key myself. In the code I want to protect, I add the following call,
EnsureAssemblyIsSignedByMyCompany( Assembly.GetCallingAssembly() );
Then the implementation of that method is
/// <summary>
/// Ensures that the given assembly is signed by My Company or Microsoft.
/// </summary>
/// <param name="assembly"></param>
private static void EnsureAssemblyIsSignedByMyCompany( Assembly assembly )
{
    if ( assembly == null )
        throw new ArgumentNullException( "assembly" );

    byte[] pubkey = assembly.GetName().GetPublicKeyToken();
    if ( pubkey.Length == 0 )
        throw new ArgumentException( "No public key token in assembly." );

    StringBuilder builder = new StringBuilder();
    foreach ( byte b in pubkey )
    {
        builder.AppendFormat( "{0:x2}", b );
    }
    string pkString = builder.ToString();

    if ( pkString != "b77a5c561934e089" /* Microsoft */ &&
         pkString != "abababababababab" /* Ivara */ )
    {
        throw new ArgumentException( "Assembly is not signed by My Company or Microsoft. You do not have permission to call this code." );
    }
}
(Names and keys changed to protect the innocent. Any likeness to real names or companies is merely a coincidence.)
See this article:
http://blogs.msdn.com/ericlippert/archive/2008/10/06/preventing-third-party-derivation-part-two.aspx
Particularly this part:
In recent versions of .NET, "full trust means full trust". That is, fully-trusted code satisfies all demands, including demands for things like "was signed with this key", whether it actually was signed or not.
Isn't that a deadly flaw in the security system? No. Fully trusted code always had the ability to do that, because fully trusted code has the ability to control the evidence associated with a given assembly. If you can control the evidence, then you can forge an assembly that looks like it came from Microsoft, no problem. (And if you already have malicious full-trust code in your process then you have already lost -- it doesn't need to impersonate Microsoft-signed assemblies; it already has the power to do whatever the user can do.)
Apparently, the .NET designers felt that this attribute wasn't very effective for full-trust code in .NET 1.x either.
As Joel indicated, you are out of luck with regard to CAS. However, you may be able to do the check yourself in any method you need to protect by using Assembly.GetCallingAssembly() to get a reference to the assembly containing the calling code, then check the strong name on that assembly manually.