Duplicated code sections - move to a service?

I have a C# application that lets users write a test and execute it (the client). It also supports distributed execution across multiple machines, using a central server and agents on those machines.
The agent is practically a duplicate of the original execution capability, but it lives in a standalone solution.
We'd like to refactor this because:
Code duplication.
If a user tries to write and execute a test on a machine that is already running an agent, there will be a problematic collision.
I'm considering two options:
Move the execution into a service that both the client and the agent will use. I mean a service that runs locally, not a web service.
Merge the client and the agent: there would be no separate agent, and the server would communicate with the client as if it were an agent instead.
I have no experience working with services. Are there any known advantages/disadvantages to either option?

A common library shared by both the client and the agent sounds more appropriate: it keeps the simple case (just using the client) simple and avoids the overhead of having to set up an extra service locally.
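As a minimal sketch of that idea (all names here are hypothetical), the execution code could live in one class library that both the client solution and the agent solution reference:

    // TestExecution.Core - a shared class library; names are placeholders.
    namespace TestExecution.Core
    {
        public class TestRunResult
        {
            public bool Passed { get; set; }
            public string Output { get; set; }
        }

        public interface ITestRunner
        {
            TestRunResult Run(string testScript);
        }

        public class LocalTestRunner : ITestRunner
        {
            public TestRunResult Run(string testScript)
            {
                // The single, shared implementation of the execution logic lives here,
                // so the client and the agent can no longer drift apart.
                return new TestRunResult { Passed = true, Output = "placeholder" };
            }
        }
    }

The client and the agent then differ only in how they host the runner (interactive UI versus server-driven process), and the shared library is also a natural place to add a machine-wide lock so that the client and an agent on the same machine do not execute tests at the same time.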

Related

How can I organise Kerberos keytabs and ccaches?

I have a bit of a problem understanding how to design a system that communicates using the Kerberos protocol. Let's imagine I have an application instance with a large number of plugins that need to communicate with different services. For example, one plugin is responsible for working with Postgres, another for working with Windows AD. But I need these plugins not to have access to each other's services; i.e. the Postgres plugin should not be able to reach the Windows AD service, and vice versa. And if I have multiple instances of the Postgres plugin running, each of them should have its own service access.
The actual question: how do I store keytabs and/or ccaches so that each service has its own access, isolated from the others? For example, the pgx library requires that a TGT (ccache) already exist when it connects, and the ccache location can only be changed through an environment variable that applies to the whole application. But what should I do if I need to create another connection in the same application with a different TGT? It would be nice if the pgx library could take a keytab and generate the TGT automatically on every connection, but unfortunately it cannot do that.
I just don't understand how I could organize multiple connections from my application, given that every plugin must have different access rights and that several plugins can connect either to the same service or to different ones.

How Do Service Connections Work For On-Prem Agents Connecting To On-Prem Services?

This question is purposefully general because I'm trying to understand things more from an architectural perspective, because that will impact which group I need to contact. My team is using Azure DevOps (cloud) with on-prem build agents. The agents connect to ADO via a proxy.
We use several tools in-house provided by vendors with ADO plugins in the Marketplace that require us to set up service connections. Because the services are installed on-prem, the endpoints we enter are not available via the Web (e.g. https://vendor-product.my-company.com).
If I log into the build machine and open up IE, I am able to connect to the service endpoint URL. However, whenever I try to run a task from ADO, it fails with some kind of connection-related issue ("The underlying connection was closed: An unexpected error occurred on a send", "Task ended with an exception: Error: read ECONNRESET", etc.).
The way I thought it worked, all the work takes place on the build machine itself, so the calls would be going from my-build-server.my-company.com to https://vendor-product.my-company.com. Those error messages, though, make me wonder whether the connection is actually coming from https://dev.azure.com.
So the questions I have are:
For situations like this, is the connection to a service endpoint going to be seen as coming from my on-prem build agent, or from ADO (or does it vary based on how the vendor writes their plugin)?
If the answer to #1 is "it varies", is there any way for me to tell just from the plugin itself without having to contact the vendor? (In my experience some of the vendor reps don't understand how the cloud works.)
and/or
Because my build agent was configured to use a proxy when I set it up, is it going to use that proxy for all connections, even internal ones? I think I can set up a proxy bypass list for the agents (see the sketch after this question), but I presently only have read access to the build box. I can request temporary elevated access, but I'd need some level of confidence that this is actually the issue.
Hope I explained the situation clearly, thanks in advance for any insight.
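On the proxy-bypass point in the last question above: as far as I know, the self-hosted Azure Pipelines agent reads a bypass list from a file named .proxybypass in the agent's root folder, with one regular expression per line matched against the request URL (verify the exact file name and format against the agent documentation for your agent version). A hypothetical example for the endpoints above:

    vendor-product\.my-company\.com
    .*\.my-company\.com

Note that editing this file (and restarting the agent) requires write access to the agent folder on the build box, and it only helps if the connection really originates from the agent rather than from dev.azure.com.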

Remotely execute PowerShell scripts to collect data

I am looking to collect data snapshots at random intervals from various machines on our network. We don't own these machines, but we may get access to install an agent on them to collect the data.
These machines are either in a domain or a workgroup, and the kind of data I get depends on the role they play and the information they hold. The machines run Windows Server 2003 and above, and I do not want to install anything on them before I get started, so I thought I could use PowerShell scripts that I remotely invoke from my server, passing in the script each machine has to run in order to return the data.
I was wondering whether this is possible with PowerShell scripts and, since this is supposed to run in a secure environment, whether there are any major security implications with this approach, i.e. do I need to do anything on the client machines that could make them vulnerable to security threats?
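To make the idea concrete, this is roughly what I have in mind from the server side. It is only a sketch: it assumes PowerShell remoting (WinRM) is reachable on the target, uses the PowerShell SDK from C#, and the machine name, credentials and query are placeholders.

    // Sketch only: requires a reference to System.Management.Automation
    // and assumes WinRM/PowerShell remoting is enabled on the target machine.
    using System;
    using System.Management.Automation;
    using System.Management.Automation.Runspaces;
    using System.Security;

    class RemoteSnapshot
    {
        static void Main()
        {
            var password = new SecureString();
            foreach (char c in "placeholder-password") password.AppendChar(c);
            var credential = new PSCredential(@"DOMAIN\svc-collector", password);

            var connection = new WSManConnectionInfo(
                new Uri("http://target-machine:5985/wsman"),
                "http://schemas.microsoft.com/powershell/Microsoft.PowerShell",
                credential);

            using (Runspace runspace = RunspaceFactory.CreateRunspace(connection))
            {
                runspace.Open();
                using (PowerShell ps = PowerShell.Create())
                {
                    ps.Runspace = runspace;
                    // The script to run is supplied by my server; nothing is installed on the target.
                    ps.AddScript("Get-WmiObject Win32_OperatingSystem | Select-Object Caption, LastBootUpTime");
                    foreach (PSObject row in ps.Invoke())
                        Console.WriteLine(row);
                }
            }
        }
    }

(The plain-PowerShell equivalent would be Invoke-Command with -ComputerName and -ScriptBlock.)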
By the way, these machines are not exposed to the internet and are behind a firewall.
I would appreciate it if you could point me to any other alternatives that could be useful for my analysis.
Regards
Kiran

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run. So how can I deploy my application to an EC2 instance without using SSH?
This was the article in question: http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand: if something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you accomplish this using a pull model. For example, at boot time pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The scripts should also set up all the monitoring required to determine whether your server and application are up and running properly. You would still need an SSH client on the instance if you want to pull your code over SSH (although you could also do it over HTTPS).
You can also use configuration management tools that don't use SSH at all, such as Puppet or Chef. Essentially, your node/server pulls all of your application and server configuration from the Puppet master or the Chef server, and the Puppet agent or Chef client then performs all the configuration, deployment and monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and when something does go wrong, discard that server and spin up a new one (even better if this whole process is automated).
Hope this helps.

Is this an MSDTC configuration issue?

It seems I am running into a Microsoft Distributed Transaction Coordinator (MSDTC) related issue.
SCENARIO
I am using TransactionScope, and within a single scope the code hits two different databases on different servers (for instance, DB_A running on Windows Server 2003 and DB_B running on Windows Server 2008). One database is accessed using Entity Framework 4.0 and the other using plain ADO.NET APIs.
When I run the application from my development machine (running Windows XP), it commits and rolls back both connections correctly. But when I run the application deployed on another server (for instance WAS_A, running Windows Server 2003), it commits correctly, but in case of an exception it doesn't roll back the database activity on both servers.
I thought it would be an MSDTC configuration issue on WAS_A, so I went to MSDTC -> Security Configuration and checked all the available options (as I had done previously on other machines). But I am still facing the same issue.
Looking for your expert advice. :)
I believe that you need to look into Enabling Transaction Flow. Specifically, take a look at how one call may error while the other completes, as described in TransactionScope and WCF Services:
an error in a second WCF service call was NOT rolling back the changes made in a previous WCF service call...
In order to create an ambient transaction in your client and ensure that it is used by your WCF services...
The article then details the following steps, which are sketched in code below:
Configure Your Binding with transactionFlow
Decorate Your Interface with [TransactionFlow(TransactionFlowOption)]
Decorate Your Method with [OperationBehavior(TransactionScopeRequired)]
Optionally update your Connection Strings with Transaction Binding*
*Note: This is optional in my opinion.
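Putting those steps together, a rough sketch of what they look like in code (service and operation names are placeholders, and per step 1 the wsHttpBinding used by the endpoint must also be configured with transactionFlow="true"):

    using System.ServiceModel;
    using System.Transactions;

    [ServiceContract]
    public interface IOrderService
    {
        [OperationContract]
        [TransactionFlow(TransactionFlowOption.Allowed)]     // step 2: let the caller's transaction flow in
        void SubmitOrder(int orderId);
    }

    public class OrderService : IOrderService
    {
        [OperationBehavior(TransactionScopeRequired = true)] // step 3: enlist in the flowed (or a new) transaction
        public void SubmitOrder(int orderId)
        {
            // Entity Framework / ADO.NET work against DB_A and DB_B goes here,
            // inside the ambient distributed transaction.
        }
    }

    public static class CallerExample
    {
        public static void Submit(IOrderService proxy)
        {
            // On the caller: both the WCF call and any local database work
            // enlist in the same ambient transaction.
            using (var scope = new TransactionScope())
            {
                proxy.SubmitOrder(42);
                // ... other database work ...
                scope.Complete();   // anything short of Complete() rolls everything back
            }
        }
    }

With this in place, an exception anywhere before scope.Complete() should cause everything in the scope to roll back on both servers, which is the behaviour missing on WAS_A.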