It seems I am running into a Microsoft Distributed Transaction Coordinator (MSDTC) related issue.
SCENARIO
I am using TransactionScope, and within a single scope it hits two different databases on different servers (for instance, DB_A running on Windows Server 2003 and DB_B running on Windows Server 2008). One database is accessed using Entity Framework 4.0 and the other using plain ADO.NET APIs.
When I run the application from my development machine (running WinXP), it commits and rolls back both connections correctly. But when I run the application deployed on another server (for instance, WAS_A running Windows Server 2003), it commits correctly, but in case of an exception it doesn't roll back the database activities on either server.
I thought it would be an MSDTC configuration issue on WAS_A. So I went to MSDTC -> Security Configuration and checked all the available options (as I had previously done on other machines), but I am still facing the same issue.
Looking for your expert advice. :)
I believe that you need to look into Enabling Transaction Flow. Specifically, take a look at how one call may error while the other completes, as described in TransactionScope and WCF Services:
an error in a second WCF service call was NOT rolling back the changes made in a previous WCF service call...
In order to create an ambient transaction in your client and ensure that it is used by your WCF services...
The article then details the following steps (a code sketch follows the list):
Configure Your Binding with transactionFlow
Decorate Your Interface with [TransactionFlow(TransactionFlowOption)]
Decorate Your Method with [OperationBehavior(TransactionScopeRequired)]
Optionally update your Connection Strings with Transaction Binding*
*Note: This is optional in my opinion.
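To make those steps concrete, here is a minimal sketch in code. The service and operation names are illustrative (not from the article), and step 1 is shown programmatically rather than in config:

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    // Step 2: allow the client's ambient transaction to flow into this operation.
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void SaveOrder(string orderId);
}

public class OrderService : IOrderService
{
    // Step 3: enlist this method in the flowed transaction (or start one if none flowed).
    [OperationBehavior(TransactionScopeRequired = true)]
    public void SaveOrder(string orderId)
    {
        // Database work here commits or rolls back with the caller's transaction.
    }
}

public static class BindingFactory
{
    // Step 1, done in code instead of config: enable transaction flow on the binding.
    public static WSHttpBinding CreateTransactionalBinding()
    {
        return new WSHttpBinding { TransactionFlow = true };
    }
}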
This question is purposefully general because I'm trying to understand things from an architectural perspective; the answer will affect which group I need to contact. My team is using Azure DevOps (cloud) with on-prem build agents. The agents connect to ADO via a proxy.
We use several vendor-provided tools in-house whose ADO plugins in the Marketplace require us to set up service connections. Because the services are installed on-prem, the endpoints we enter are not reachable from the public Web (e.g. https://vendor-product.my-company.com).
If I log into the build machine and open up IE, I am able to connect to the service endpoint URL. However, whenever I try to run a task from ADO, it fails with some kind of connection-related issue ("The underlying connection was closed: An unexpected error occurred on a send", "Task ended with an exception: Error: read ECONNRESET", etc.).
The way I thought it worked, all the work takes place on the build machine itself, so the calls would be going from my-build-server.my-company.com to https://vendor-product.my-company.com. Those error messages, though, make me wonder if the connection is actually coming from https://dev.azure.com.
So the questions I have are:
For situations like this, is the connection to a service endpoint going to be seen as coming from my on-prem build agent, or from ADO (or does it vary based on how the vendor writes their plugin)?
If the answer to #1 is "it varies", is there any way for me to tell just from the plugin itself without having to contact the vendor? (In my experience some of the vendor reps don't understand how the cloud works.)
and/or
Because my build agent was configured to use a proxy when I set it up, is it going to use that proxy for all connections, even internal ones? I think I can set up a proxy bypass list for the agents but I presently only have read access to the build box. I can request temporary elevated access but I'd need some level of confidence that's what the issue is.
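On the proxy question: my understanding (worth verifying against the agent documentation for your agent version) is that the agent routes requests through the configured proxy unless you add a .proxybypass file to the agent's root directory, with one regular expression per line matching the URLs to bypass, for example:

vendor-product\.my-company\.com
.*\.internal\.my-company\.com

The host names above are just placeholders based on the examples in your question.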
Hope I explained the situation clearly, thanks in advance for any insight.
I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model. For example, at boot, pull your code from a git/mercurial repository and then execute scripts to set up your instance. The scripts set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client on the instance if you want to pull your code over SSH (although you could also do it over HTTPS).
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially, your node/server pulls all your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and in the event something goes wrong, discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.
I have a solution that contains a self-hosted WCF service. That service connects through EF and can read and write just fine.
In the same solution I also host an NServiceBus endpoint. It gets events from a separate running solution.
When I run the NServiceBus project (by itself) it seems to be working fine, until I try to query my database. When I do that I get this EntityException:
The underlying provider failed on Open.
The inner exception is a TransactionException with a message of:
The partner transaction manager has disabled its support for remote/network transactions
Both my NServiceBus and WCF service projects use the exact same configuration and EF projects. I don't understand why one fails and the other does not.
I did some Googling and came across this page: http://msdn.microsoft.com/en-us/library/aa561924%28BTS.20%29.aspx that showed me how to set up MSDTC, and I did it on my client machine. But it had no effect.
I also found this question that says it needs to be set: NServiceBus: System.Transactions.TransactionException: The partner transaction manager has disabled its support for remote/network transactions. But it does not say why or where.
Do I need to set up MSDTC on my DB server? If so, why? What is MSDTC?
Why does running from an NServiceBus hosted process cause this error?
UPDATE: I found this link that helped me understand what DTC does. It also showed me how to turn it off if needed:
using (TransactionScope sc = new TransactionScope(TransactionScopeOption.Suppress))
{
    YourDatabaseHandler.SaveMyStuff(whatever);
    sc.Complete();
}
Though it sounds like it is a good thing in many situations.
Since NSB is using a transactional queue, you're going to be participating in a distributed transaction. This means that each machine participating will have to vote on whether or not to complete the transaction. This is done via the Distributed Transaction Coordinator (MSDTC). It will have to be running on both machines (and if you are using other DBs like Oracle, there is an additional service). To manage MSDTC, go to Component Services -> Computers -> My Computer -> Distributed Transaction Coordinator -> Local DTC. Right-clicking on that node will give you access to all the configuration, including security.
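To illustrate why MSDTC gets involved: opening a second durable connection inside one TransactionScope escalates the lightweight local transaction to a distributed one. A minimal sketch (server and database names are illustrative):

using System.Data.SqlClient;
using System.Transactions;

class EscalationDemo
{
    static void Main()
    {
        using (var scope = new TransactionScope())
        using (var connA = new SqlConnection("Data Source=ServerA;Initial Catalog=DbA;Integrated Security=True"))
        using (var connB = new SqlConnection("Data Source=ServerB;Initial Catalog=DbB;Integrated Security=True"))
        {
            connA.Open(); // first connection: stays a lightweight local transaction
            connB.Open(); // second durable resource: escalates to MSDTC
            // ... execute commands against both connections here ...
            scope.Complete(); // both resource managers vote to commit via the DTC
        }
    }
}

This is why the error shows up only when MSDTC is unreachable or disabled on one of the participating machines.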
I'm deploying an ASP.NET MVC 2 application using Apache / mod_mono / MONO (2.8.1) that uses the built in ASP.NET authentication framework.
When I restart Apache, or use the mod_mono control panel to restart the mono server process, users are logged out. I don't want this to happen.
I'm using custom Profile / Membership / Role providers (backed by a Redis database), and these currently have a bare-minimum implementation. I cannot see how my problem fits in here, however; am I missing something obvious?
I notice that the .MONOAUTH cookie changes value when a user logs back in, so I guess there is some persistence that needs to happen that is not happening.
Any solutions or pointers to the relevant documentation would be great!
NOTE: I'm not sure if the information below differs when you're using a Membership Provider -- it may be that session state is persisted by the Membership Provider itself.
It's likely that you're using "in-process" session state storage. This means that whenever you restart the web server process, you're clearing out all the session information stored in the web server process's memory space.
To avoid wiping out session information, you can move to an out-of-process session state server, either running as an in-memory service (see below for the Mono version) or backed by SQL Server. Otherwise, there are also a number of unofficial custom session store providers that use alternative storage mechanisms (e.g. MongoDB).
I found what you may want, which is this Mono ASP.NET Session State Server: http://manpages.ubuntu.com/manpages/gutsy/man1/asp-state2.1.html
As a first step, take a look in your web.config at the system.web -> sessionState property. If it's set to mode="InProc" then there's your problem. It should look more like:
<sessionState
mode="StateServer"
stateConnectionString="tcpip=server:port"
stateNetworkTimeout="number of seconds"/>
Solution: set the validationKey and decryptionKey manually:
<machineKey validationKey="blahblah" decryptionKey="blahblah" />
I think this is probably a bug in Mono: these keys take on different values across server restarts when they are auto-generated (which is the default).
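If you need values for those attributes, one way (among many) to generate random hex key material is a small C# program like the one below; the key lengths are just common choices, so check what your validation/decryption algorithms expect:

using System;
using System.Security.Cryptography;

class MachineKeyGen
{
    static void Main()
    {
        Console.WriteLine("validationKey: " + RandomHex(64));  // 64 bytes of key material
        Console.WriteLine("decryptionKey: " + RandomHex(32));  // 32 bytes, e.g. for AES-256
    }

    static string RandomHex(int byteCount)
    {
        var bytes = new byte[byteCount];
        using (var rng = RandomNumberGenerator.Create())
            rng.GetBytes(bytes);
        return BitConverter.ToString(bytes).Replace("-", "");
    }
}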
I've been trying for quite some time to use Entity Framework with our IBM Informix databases. Hours of searching pointed me towards installing the IBM .NET Data Server Provider, which I have installed; however, when I attempt to add a new Entity Model to my project, I only have the Microsoft SQL Server data providers listed. Am I missing a step? Is this even possible?
I am not an expert on Windows or .NET; treat any comments I make with due caution.
Installing the .NET Data Server Provider is an important first step. You now have to make sure that you can use it to connect to the Informix databases you want to manipulate. There are several things you'll need to check here:
Is the server (meaning the Informix instance) configured to allow DRDA connections?
By default, it probably isn't.
If you're the DBSA (database system administrator), you'll need to check that you've enabled 'drsoctcp' connections on the system, and configured a server alias to use that connection.
If you're not the DBSA, you'll need to chat with your DBSA to get the relevant information.
Assuming that you have DRDA connectivity enabled at the server side, you then need to ensure you have an appropriately configured ... DSN? Your client code needs to be able to connect to the server.
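Once connectivity is in place, a quick smoke test from C# with the Data Server Provider might look like the sketch below. Everything in the connection string (host, DRDA port, database, credentials) is an invented placeholder for your own environment:

using System;
using IBM.Data.DB2;

class InformixConnectTest
{
    static void Main()
    {
        // Placeholder values; substitute your own server, port, database, and credentials.
        var connStr = "Server=ifxhost:9089;Database=stores_demo;UID=informix;PWD=secret;";
        using (var conn = new DB2Connection(connStr))
        {
            conn.Open();
            Console.WriteLine("Connected, server version: " + conn.ServerVersion);
        }
    }
}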
There is no reason I'm aware of why it cannot be done. However, I don't know exactly how to guide you step-by-step through any of the above.
You might need to seek assistance from IBM Technical Support.
You would help everyone if you clarified which version of Informix (the DBMS) you have, along with the version information for the platform where it is running (whether Windows or Unix, and the o/s version information) - and which version of the Data Server Provider you are using (and which variant of Windows you are using it on).