How to use multiple authentication plugins in the same service in Kong - kubernetes

I am looking to use Cypress for end-to-end testing of some Kubernetes applications. Typically I access these applications via OIDC through Kong; Cypress doesn't support OIDC, but it does support key-auth via an API key. Is there a way of setting up the service so that I can use both of these simultaneously?

I don't think you can use more than one authentication plugin in an XOR scenario (accept either credential). Multiple plugins only work as an AND (every credential must be present), and only as long as the plugins do not use the same headers.
I also faced this problem, and I solved it by setting up one service (pointing to the backend) and multiple routes: one for normal traffic, one for test traffic. You can then activate different plugins on each route instead of attaching them to the service; a sketch follows below.
The only downside is the slightly different base path you use for testing, but I think this is less problematic than testing with a different authentication method.
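For the Kubernetes case with the Kong Ingress Controller, a minimal sketch of the test route: a second Ingress pointing at the same backend Service, with only key-auth attached via a KongPlugin resource. All names, paths, and the port below are placeholders; the normal Ingress would carry your OIDC plugin the same way.

```yaml
# Hypothetical test route: same backend, key-auth only.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: key-auth-test
plugin: key-auth
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-test
  annotations:
    # Attach only the key-auth plugin to this route.
    konghq.com/plugins: key-auth-test
spec:
  ingressClassName: kong
  rules:
    - http:
        paths:
          - path: /my-app-test
            pathType: Prefix
            backend:
              service:
                name: my-app   # same Service the OIDC-protected Ingress uses
                port:
                  number: 8080
```

Cypress then calls /my-app-test with its API key, while normal users keep going through the OIDC-protected path.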

Related

How can I organise Kerberos keytabs and ccaches?

I have a bit of a problem understanding how to design a system that communicates using the Kerberos protocol. Let's imagine I have an application instance with a large number of plugins that need to communicate with different services. For example, one plugin is responsible for working with Postgres and another for working with Windows AD. But I need these plugins not to have access to each other's services: the Postgres plugin should not be able to reach the Windows AD service, and vice versa. And if I have multiple instances of the Postgres plugin running, each of them should have its own service access.
The actual question: how do I store keytabs and/or ccaches so that each service has its own access, restricted from the others? The pgx library, for instance, requires that a TGT (ccache) already exist when the connection is made, and the ccache can only be changed through an environment variable that applies to the whole application. What do I do if I need to create another connection in the same application, but with a different TGT? It would be nice if pgx could take a keytab and obtain the TGT automatically with every connection, but unfortunately it cannot do this.
I just don't understand how I could organize multiple connections from my application, given that every plugin must have different access, and that several plugins can connect either to the same service or to different ones.
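Not a full answer, but one way to get the isolation described, sketched under the assumption that each plugin can run as its own process: give every plugin a private keytab and credential cache, obtain the TGT with kinit, and launch the plugin with KRB5CCNAME pointing at its own cache, so the GSSAPI layer underneath a library like pgx only ever sees that plugin's TGT. All paths, principals, and binaries below are hypothetical.

```go
// Minimal sketch: isolate each plugin in its own process with its own
// credential cache. Assumes MIT Kerberos' kinit is installed; every
// path, principal, and plugin binary here is made up.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

type plugin struct {
	binary    string // plugin executable to launch
	keytab    string // keytab holding only this plugin's credentials
	principal string // client principal for this plugin
	ccache    string // private credential cache for this plugin
}

func startPlugin(p plugin) error {
	// Obtain a TGT from the plugin's keytab into its private ccache.
	kinit := exec.Command("kinit", "-k", "-t", p.keytab, "-c", p.ccache, p.principal)
	kinit.Stderr = os.Stderr
	if err := kinit.Run(); err != nil {
		return fmt.Errorf("kinit for %s: %w", p.principal, err)
	}

	// Launch the plugin with KRB5CCNAME pointing at its own cache, so
	// the GSSAPI layer used by libraries such as pgx picks up only
	// this plugin's TGT.
	cmd := exec.Command(p.binary)
	cmd.Env = append(os.Environ(), "KRB5CCNAME=FILE:"+p.ccache)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Start()
}

func main() {
	plugins := []plugin{
		{"./postgres-plugin", "/etc/keytabs/postgres.keytab", "svc-postgres@EXAMPLE.COM", "/run/cc/postgres"},
		{"./ad-plugin", "/etc/keytabs/ad.keytab", "svc-ad@EXAMPLE.COM", "/run/cc/ad"},
	}
	for _, p := range plugins {
		if err := startPlugin(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```

Filesystem permissions on each keytab and ccache then enforce the separation. Doing this per connection inside a single process would instead need a Kerberos library that accepts an explicit ccache, which, as noted, pgx does not expose.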

How Do Service Connections Work For On-Prem Agents Connecting To On-Prem Services?

This question is purposefully general because I'm trying to understand things more from an architectural perspective, because that will impact which group I need to contact. My team is using Azure DevOps (cloud) with on-prem build agents. The agents connect to ADO via a proxy.
We use several tools in-house provided by vendors with ADO plugins in the Marketplace that require us to set up service connections. Because the services are installed on-prem, the endpoints we enter are not available via the Web (e.g. https://vendor-product.my-company.com).
If I log into the build machine and open up IE, I am able to connect to the service endpoint URL. However, whenever I try to run a task from ADO, it fails with some kind of connection-related issue ("The underlying connection was closed: An unexpected error occurred on a send", "Task ended with an exception: Error: read ECONNRESET", etc.).
The way I thought it worked, all the work takes place on the build machine itself, so the calls would be going from my-build-server.my-company.com to https://vendor-product.my-company.com. Those error messages though make me wonder if the connection is actually coming from https://dev.azure.com.
So the questions I have are:
1. For situations like this, is the connection to a service endpoint going to be seen as coming from my on-prem build agent, or from ADO (or does it vary based on how the vendor writes their plugin)?
2. If the answer to #1 is "it varies", is there any way for me to tell just from the plugin itself without having to contact the vendor? (In my experience some of the vendor reps don't understand how the cloud works.)
and/or
3. Because my build agent was configured to use a proxy when I set it up, is it going to use that proxy for all connections, even internal ones? I think I can set up a proxy bypass list for the agents, but I presently only have read access to the build box. I can request temporary elevated access, but I'd need some level of confidence that's what the issue is.
Hope I explained the situation clearly, thanks in advance for any insight.
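On the proxy question specifically: as far as I know, a self-hosted agent configured with a proxy routes its web requests through that proxy, but it also honors a .proxybypass file in the agent root directory, containing one regular expression per line matched against request URLs (worth verifying against the docs for your agent version). Something like the sketch below, using your example hostname, should send that traffic direct rather than via the proxy.

```
vendor-product\.my-company\.com
```

Note also that individual tasks have to honor the agent's proxy configuration themselves, so behavior can vary by plugin, which bears on question #1.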

SAML integration with Nagios

I am trying to integrate Nagios with SimpleSAMLphp for company-wide single sign-on.
I have installed SimpleSAMLphp and Nagios on Apache.
I am looking for the configuration settings where I can specify SAML settings for Nagios.
Has anyone worked on it?
I don't think it's possible to do this without setting the Apache REMOTE_USER variable. The Nagios UI is rendered by PHP and CGI scripts. The PHP scripts take their user from the Apache server's REMOTE_USER variable, and that could be swapped for any other PHP variable. BUT most of the functionality and restrictions are processed in the CGI scripts.
Based on this issue on the Nagios Core GitHub repo, it doesn't look like they are planning to offer any other method of passing user information to the CGI scripts. Nagios XI doesn't appear to offer any options either, based on this forum post.
For authentication in Nagios Core these may be the best options:
1. Continue using htaccess authentication
2. Wrap the entire application within another one to protect it (but you lose per-user privileges)
3. Manually set the Apache REMOTE_USER variable when someone logs in (possibly a security risk, if it is possible at all)
4. Migrate to Icinga, which does offer more authentication options
At my place of employment, we will likely be using #3 or #4 based on what's possible and more secure.
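For option #3, a hedged sketch of how that can look in practice: instead of bridging SimpleSAMLphp into Nagios, the mod_auth_mellon Apache module performs the SAML exchange itself and sets REMOTE_USER, which both the PHP pages and the CGI scripts then see. All paths and the attribute name below are placeholders.

```apache
# Sketch only: assumes mod_auth_mellon is installed and that the SP
# keys and IdP metadata below were generated for your environment.
<Location "/nagios">
    MellonEnable "auth"
    AuthType "Mellon"
    Require valid-user
    # The SAML attribute copied into REMOTE_USER for PHP and the CGIs:
    MellonUser "uid"
    MellonEndpointPath "/nagios/mellon"
    MellonSPPrivateKeyFile "/etc/httpd/mellon/sp-key.pem"
    MellonSPCertFile "/etc/httpd/mellon/sp-cert.pem"
    MellonIdPMetadataFile "/etc/httpd/mellon/idp-metadata.xml"
</Location>
```

The usual caveat applies: Nagios' cgi.cfg still has to list the SAML usernames for the per-user privileges to work.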

AWS deployment without using SSH

I've read some articles recently on setting up AWS infrastructure without enabling SSH on EC2 instances. My web app requires a binary to run, so how can I deploy my application to an EC2 instance without using SSH?
This was the article in question.
http://wblinks.com/notes/aws-tips-i-wish-id-known-before-i-started/
Although doable, as the article says, it requires you to think of servers as ephemeral. A good example of this is web services that scale up and down depending on demand. If something goes wrong with one of the servers, you can just terminate it and spin up another one.
Generally, you can accomplish this using a pull model: for example, at boot, pull your code from a Git/Mercurial repository and then execute scripts to set up your instance. The script should set up all the monitoring required to determine whether your server and application are up and running appropriately. You would still need an SSH client on the instance if you want to pull your code over SSH, although you could also do it through HTTPS.
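As a concrete illustration of the pull model, a minimal EC2 user-data sketch that fetches a binary over HTTPS at first boot and runs it as a service, with no SSH involved. The artifact URL, paths, and service name are all made up, and a systemd-based AMI is assumed.

```bash
#!/bin/bash
# EC2 user-data sketch: runs once at first boot, no SSH involved.
# The artifact URL, install path, and service name are placeholders.
set -euo pipefail

# Pull the application binary over HTTPS.
curl -fsSL https://releases.example.com/myapp/latest/myapp -o /usr/local/bin/myapp
chmod +x /usr/local/bin/myapp

# Register it as a service so it restarts on failure.
cat >/etc/systemd/system/myapp.service <<'EOF'
[Unit]
Description=My web app
After=network-online.target

[Service]
ExecStart=/usr/local/bin/myapp
Restart=always

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now myapp
```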
You can also use configuration management tools that don't use SSH at all, like Puppet or Chef. Essentially, your node/server pulls all of your application and server configuration from the Puppet master or the Chef server. The Puppet agent or Chef client then performs all the configuration/deployment/monitoring changes needed for your application to run.
If you go with this model, I think one of the most critical components is monitoring. You need to know at all times whether something is wrong with one of your servers, and in that event discard the server and spin up a new one. (Even better if this whole process is automated.)
Hope this helps.

Windows Azure production vs staging server and Facebook integration

We use Windows Azure Cloud services to host our application. One of the great features of Windows Azure is the Production/Staging model. You can have the clients of your application routed to your production server, while you can test your new code running on a staging server. For example, you can configure Azure to point a production server to http://www.coolapp.com while designating a staging server for the same app to something like this: http://7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net.
Physically both of these servers are publicly facing. If you were to know the cryptic URL of a staging server you would be able to browse to the app just as easily as you would browse to www.coolapp.com. However, the presence of a GUID in the URL makes it virtually impossible for someone to guess it, thus making the staging server "private". This gives a nice mechanism to the developers of an application to deploy and test the new bits on a staging server before releasing them to public. Once they make sure that things look good, with a flip of a switch they swap the two servers, making staging server a production server and vice versa.
This model creates a small problem for us in relation to Facebook integration. To be able to integrate Facebook plugins you have to register your app with them. FB will then issue an AppId and an AppSecret key. These keys are tied to the URL of your application. So in order for my app to work with FB plugins I need to obtain one set of keys tied to 7f8e9d5ba73a4f7ea9ebd65a02ee195d.cloudapp.net, and another set tied to www.coolapp.com.
When I read about Windows Azure, they really urge developers to treat staging vs. production servers as the same. The only difference between them should be the URL. In other words, Azure does not recommend basing your app logic on which server the code happens to be running on as Azure has no inherent knowledge of this. Staging vs. production is just a handy "abstraction" if you will. I guess you see the problem here. In our example above, I have to use one set of keys issued by FB versus another depending on which URL (production vs. staging) my app is running at. I assume I am not the first one running into this problem. What are the correct ways of handling this? One obvious way is to sniff the URL property of the Request object and branch my logic that way. However, intuition tells me this is a hack. Any other ideas?
Regards,
Archil
The mechanisms I know of are:
using "production" within a totally separate service account to "testing" - this leaves "staging" within the production service to be used as an area for "deployment candidates" and provides a separate clean testing domain with a non-changing URL for earlier "dev and test" work.
using different .cscfg files for staging and production - and being careful to update this .cscfg before you do any live switching.
sniffing the incoming URL - as you suggest
Personally, I use the first of these techniques - its easy and it helps prevent nasty accidents
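For the second technique, a rough sketch of what the split can look like: keep the Facebook keys in the service configuration rather than in code, with one .cscfg per environment. The setting names here are made up, and the staging twin would hold the keys FB issued for the GUID URL.

```xml
<!-- ServiceConfiguration.Production.cscfg (sketch; setting names are
     made up, with a staging counterpart carrying the other key pair). -->
<ServiceConfiguration serviceName="CoolApp"
    xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
  <Role name="WebRole">
    <Instances count="2" />
    <ConfigurationSettings>
      <Setting name="FacebookAppId" value="prod-app-id" />
      <Setting name="FacebookAppSecret" value="prod-app-secret" />
    </ConfigurationSettings>
  </Role>
</ServiceConfiguration>
```

The app then reads the values at runtime with RoleEnvironment.GetConfigurationSettingValue, so the code never branches on which slot it is in. The caveat above still applies: the configuration travels with the deployment, so it must be updated before a VIP swap.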
As an aside, one of the techniques we've used for "removing" the GUID from staging is to point a CNAME record with a really short TTL at the GUID URL - this allows us to quickly and automatically update the CNAME record for the staging server when we deploy.