How can I organise Kerberos keytabs and ccaches? - postgresql

I have a bit of a problem understanding how to design a system that communicates using the Kerberos protocol. Let's imagine I have an application instance with a large number of plugins, each of which needs to communicate with a different service. For example, one plugin is responsible for working with Postgres and another for working with Windows AD. I need these plugins not to have access to each other's services: the Postgres plugin should not be able to reach the Windows AD service and vice versa. Likewise, if I run multiple instances of the Postgres plugin, each of them should have its own, separate service access.
The actual question: how do I store keytabs and/or ccaches so that each service has its own credentials, isolated from the others? The pgx library, for instance, requires that a TGT (in a ccache) already exist when a connection is opened, and the ccache can only be selected through an environment variable that applies to the whole application. What should I do if I need to create another connection in the same application, but with a different TGT? It would be nice if pgx could take a keytab and obtain the TGT automatically on every connection, but unfortunately it cannot do this.
I just don't understand how I could organize multiple connections from my application, given that every plugin must have different access rights, and that several plugins may connect either to the same service or to different ones.
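One workable pattern, given that pgx only picks the ticket up from the cache named by KRB5CCNAME, is to give every plugin its own keytab and its own ccache file, refresh the TGT with kinit just before connecting, and swap KRB5CCNAME only for the duration of the handshake. A minimal Go sketch; all paths, principal names, and the connectAsPlugin helper are illustrative assumptions, not anything pgx provides:

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/exec"
	"sync"

	"github.com/jackc/pgx/v5"
)

// KRB5CCNAME is process-global, so credential selection must be serialised:
// two plugins must never authenticate concurrently with different caches.
var envMu sync.Mutex

// connectAsPlugin obtains a fresh TGT for one plugin's own principal into a
// plugin-private ccache, then opens the pgx connection while KRB5CCNAME
// points at that cache.
func connectAsPlugin(ctx context.Context, plugin, principal, connString string) (*pgx.Conn, error) {
	keytab := fmt.Sprintf("/etc/app/keytabs/%s.keytab", plugin) // readable by this plugin only
	ccache := fmt.Sprintf("/var/run/app/ccache-%s", plugin)     // one cache per plugin

	// kinit -k -t <keytab> -c <ccache> <principal>: refresh the TGT from the keytab.
	if out, err := exec.CommandContext(ctx, "kinit",
		"-k", "-t", keytab, "-c", ccache, principal).CombinedOutput(); err != nil {
		return nil, fmt.Errorf("kinit for plugin %s: %v: %s", plugin, err, out)
	}

	envMu.Lock()
	defer envMu.Unlock()
	old := os.Getenv("KRB5CCNAME")
	os.Setenv("KRB5CCNAME", "FILE:"+ccache)
	defer os.Setenv("KRB5CCNAME", old)

	return pgx.Connect(ctx, connString) // e.g. "host=db.example.com user=plugin-pg"
}
```

The mutex only narrows the window, since the environment variable stays process-global. For hard isolation between plugins, run each plugin in its own subprocess with its own KRB5CCNAME and file permissions on its keytab and ccache, or obtain tickets programmatically with a library such as gokrb5 instead of shelling out to kinit.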

Related

How to use multiple authentication plugins in the same service in Kong

I am looking to use Cypress for end-to-end testing of some Kubernetes applications. Typically, I access these applications via OIDC through Kong; however, Cypress doesn't support this, but it does support key-auth via an API key. Is there a way of setting up the service so that I can use both of these simultaneously?
I think you cannot use more than one authentication plugin in an XOR scenario; this would only work as an AND, as long as the plugins do not use the same headers.
I also faced this problem and solved it by setting up one service (pointing to the backend) and multiple routes: one for normal traffic, one for test traffic. You can then activate different plugins on each route instead of attaching them to the service (see the sketch below).
The only downside is the slightly different base path you use for testing, but I think this is less problematic than testing with a different authentication method.
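In Kong's declarative configuration the one-service/two-routes idea looks roughly like this; the service name, paths, consumer, and the exact OIDC plugin (community kong-oidc vs. Kong Enterprise's openid-connect) are assumptions you would adapt:

```yaml
_format_version: "2.1"
services:
  - name: my-backend                # one service pointing at the backend
    url: http://backend.default.svc.cluster.local:8080
    routes:
      - name: normal-traffic        # interactive users authenticate via OIDC
        paths:
          - /app
        plugins:
          - name: openid-connect    # or the community kong-oidc plugin
      - name: test-traffic          # Cypress authenticates with an API key
        paths:
          - /app-test
        plugins:
          - name: key-auth
consumers:
  - username: cypress
    keyauth_credentials:
      - key: my-test-api-key        # placeholder; generate a real key
```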

Wildfly won't deploy when datasource is unavailable

I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain features of the web application and do not need to be online all the time. So when WildFly starts, some of the datasources may be offline. An unavailable datasource causes WildFly to refuse to deploy the .war, and I cannot find any way to solve this problem. Is there a way?
UPDATE:
I have a single table on a remote database server. The user will be able to query that table via my web application. The thing is, I have almost no control over the database in question: when the web application starts, the database could be offline, and this would cause my web application to fail to start. What I want is to be able to run queries against the remote database when it is online; if it is offline, the web page may fail or the query may be cancelled. The one thing I don't want is for my web application to be held up by a remote database that I have no control over.
My previous solution was a workaround: I ran queries against the remote database via a local database that has a foreign table pointing to the remote one. However, on PostgreSQL 9.5 the local database reads all rows of the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes so long that it defeats the whole purpose of lazy loading.
I found a similar question, but there is no answer.
On WildFly you can configure a datasource so that it periodically tries to reconnect after losing its connection. In my case, though, the deployment has to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
Alternatively, you could define those datasources but leave them disabled, as in the sketch below.
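Both suggestions are plain datasource settings in standalone.xml. A sketch with placeholder URL, names, and timings: enabled="false" defines the datasource without requiring the database, a minimum pool size of 0 avoids eager connections at deploy time, and background validation gives you the periodic reconnect behaviour:

```xml
<!-- standalone.xml, datasources subsystem -->
<datasource jndi-name="java:jboss/datasources/RemoteDS"
            pool-name="RemoteDS"
            enabled="false">  <!-- defined, but disabled until needed -->
    <connection-url>jdbc:postgresql://remote-host:5432/remotedb</connection-url>
    <driver>postgresql</driver>
    <pool>
        <min-pool-size>0</min-pool-size>  <!-- no eager connections at deploy time -->
    </pool>
    <validation>
        <background-validation>true</background-validation>
        <background-validation-millis>60000</background-validation-millis>
    </validation>
</datasource>
```

A disabled datasource can be switched on later without a restart, e.g. via the management CLI operation /subsystem=datasources/data-source=RemoteDS:enable.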

Duplicated code sections - move to a service?

I have a C# application that enables users to write a test and execute it (the client). It also supports distributed execution over multiple machines, using a central server and agents on those machines.
The agent is practically a duplication of the original execution ability, but it lives in a standalone solution.
We'd like to refactor this because of:
Code duplication.
A problematic collision if a user tries to write and execute a test on a machine that is already running an agent.
I'm considering two options:
Move the execution into a service that both the client and the agent will use - a service that runs locally, not a web service.
Merge the client and the agent - there would be no agent; the server would communicate with the client as if it were an agent.
I have no experience working with services. Are there any known advantages/disadvantages to either option?
A common library shared by both client and agent sounds more appropriate: it keeps the simple case (just using the client) simple and avoids the overhead of having to set up an extra local service.
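To make the shared-library option concrete, the duplicated execution code could move into a single class library that both executables reference. A C# sketch with hypothetical type names:

```csharp
// TestExecution.Core: a class library referenced by both the client and the
// agent executables. All type names here are hypothetical.
namespace TestExecution.Core
{
    public class TestDefinition
    {
        public string Name { get; set; }
        public string Script { get; set; }
    }

    public class TestResult
    {
        public string Name { get; set; }
        public bool Passed { get; set; }
    }

    public interface ITestRunner
    {
        TestResult Run(TestDefinition test);
    }

    // The single implementation of the execution logic: the client calls it
    // directly, and the agent wraps it behind its server-facing protocol.
    public class LocalTestRunner : ITestRunner
    {
        public TestResult Run(TestDefinition test)
        {
            // ...execution logic, moved here from both codebases...
            return new TestResult { Name = test.Name, Passed = true };
        }
    }
}
```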

Can I create plugins for an Azure Worker Role?

I would like to make a Worker Role in Azure that handles some behind-the-scenes processing for a Web Role. In the Web Role I would like to upload a plugin (most likely a DLL) which then becomes available for the Worker Role to use.
What about security? If I were to let third-party people upload a DLL to my Azure Worker Role, can I do anything to limit what it can do? It would not be nice if they could take control of the management API or something similar.
I am new to Azure and am exploring whether it is the right platform for this project.
Last question: I noticed that I can remote-desktop into my cloud service. Could I upload binary programs there and call them from the Worker Role as well (another kind of plugin)?
There are a few things you might want to look at. Let's assume your Worker Role is an empty shell. After the Worker Role starts, you could run a timer every X minutes that fetches the latest assemblies from, for example, a blob storage container.
You can download these assemblies to a folder and use MEF to scan them and import all objects implementing IWorkerRolePlugin, for example (this would be a custom interface you create). MEF is the best choice when you want to work with plugins; you could even create a custom catalog that links directly to a blob storage container.
Now about the security part: in your Worker Role you could, for example, create a restricted AppDomain to make sure these plugins can't do anything harmful. This code should get you started: Restricted AppDomain example
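A sketch of the MEF side, assuming the assemblies have already been downloaded to a local folder and that IWorkerRolePlugin is the custom contract mentioned above; DirectoryCatalog, CompositionContainer, and GetExportedValues are standard MEF (System.ComponentModel.Composition):

```csharp
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// The custom contract that plugin authors implement and [Export].
public interface IWorkerRolePlugin
{
    void Execute();
}

public static class PluginHost
{
    // Scans a folder of downloaded plugin assemblies and runs every export.
    public static void RunAll(string pluginFolder)
    {
        var catalog = new DirectoryCatalog(pluginFolder);
        using (var container = new CompositionContainer(catalog))
        {
            foreach (var plugin in container.GetExportedValues<IWorkerRolePlugin>())
            {
                plugin.Execute();
            }
        }
    }
}
```

Note that MEF itself provides no isolation: the restricted AppDomain from the linked example is what keeps a hostile Execute() implementation away from things like the management API.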
Try the Azure Plugin Library by Richard Astbury!
Sounds like Lokad.Cloud is just what you need.
It has an execution-framework part which consists of worker roles capable of running what they have named a Cloud Service. It comes with a web console that allows you to add new CloudService implementations by uploading assemblies, and if you configure it to allow Azure self-management, you can also adjust the number of worker instances through the web console.

Windows Azure production vs staging server and Facebook integration

We use Windows Azure Cloud services to host our application. One of the great features of Windows Azure is the Production/Staging model. You can have the clients of your application routed to your production server, while you can test your new code running on a staging server. For example, you can configure Azure to point a production server to http://www.coolapp.com while designating a staging server for the same app to something like this: http://7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net.
Physically both of these servers are publicly facing. If you were to know the cryptic URL of a staging server you would be able to browse to the app just as easily as you would browse to www.coolapp.com. However, the presence of a GUID in the URL makes it virtually impossible for someone to guess it, thus making the staging server "private". This gives a nice mechanism to the developers of an application to deploy and test the new bits on a staging server before releasing them to public. Once they make sure that things look good, with a flip of a switch they swap the two servers, making staging server a production server and vice versa.
This model creates a small problem for us in relation to Facebook integration. To be able to integrate Facebook plugins you have to register your app with them. FB will then issue an AppId and an AppSecret keys. These keys are tied to the URL of your application. So in order for my app to work with FB plugins I need to obtain one set of keys that is tied to 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net, and another set that is tied to www.coolapp.com.
When I read about Windows Azure, they really urge developers to treat staging vs. production servers as the same. The only difference between them should be the URL. In other words, Azure does not recommend basing your app logic on which server the code happens to be running on as Azure has no inherent knowledge of this. Staging vs. production is just a handy "abstraction" if you will. I guess you see the problem here. In our example above, I have to use one set of keys issued by FB versus another depending on which URL (production vs. staging) my app is running at. I assume I am not the first one running into this problem. What are the correct ways of handling this? One obvious way is to sniff the URL property of the Request object and branch my logic that way. However, intuition tells me this is a hack. Any other ideas?
The mechanisms I know of are:
using "production" within a totally separate service account to "testing" - this leaves "staging" within the production service to be used as an area for "deployment candidates" and provides a separate clean testing domain with a non-changing URL for earlier "dev and test" work.
using different .cscfg files for staging and production - and being careful to update this .cscfg before you do any live switching.
sniffing the incoming URL - as you suggest
Personally, I use the first of these techniques - it's easy and it helps prevent nasty accidents.
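To illustrate the .cscfg route: both deployments run identical binaries and read the Facebook keys from service configuration at runtime (typically via RoleEnvironment.GetConfigurationSettingValue), so only the configuration file differs per slot. Setting names and values here are illustrative:

```xml
<!-- ServiceConfiguration.Production.cscfg; the staging file would carry the
     keys FB issued for the *.cloudapp.net URL instead -->
<ServiceConfiguration serviceName="CoolApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="FacebookAppId" value="prod-app-id" />
      <Setting name="FacebookAppSecret" value="prod-app-secret" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```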
As an aside, one of the techniques we've used for "removing" the GUID from staging is to CNAME the GUID URL with a really short TTL on the DNS record - this allows us to quickly and automatically update the CNAME record for the staging server when we deploy.