Disable anonymous access to buildbot web application

I've deployed buildbot on cloud VMs, in Docker, and so on. I've been able to set up authentication, but could not disable anonymous access.
The thing is, I really can't allow anonymous access, since this is a privately owned resource; worst of all, passwords and other sensitive information show up in many logs from build steps.
buildbot version: 0.9.8
Documentation is scarce/nonexistent on this subject.
Thanks in advance.

Buildbot itself only allows disabling access to the REST API, so anonymous users will see an 'empty' web interface with no builds, logs, etc. Access to the web interface itself can only be disabled through external web server settings.
Example authz config (the roleMatchers entry below is illustrative; adapt it to your auth backend):
c['www']['authz'] = util.Authz(
    allowRules=[
        util.AnyEndpointMatcher(role='admins', defaultDeny=False),
        util.AnyControlEndpointMatcher(role='admins', defaultDeny=False),
        # 'anonymous' is a role nobody has, so this final rule acts as a default deny
        util.AnyEndpointMatcher(role='anonymous'),
    ],
    roleMatchers=[
        util.RolesFromEmails(admins=['admin@example.com']),  # hypothetical admin
    ],
)
From the Buildbot documentation, section "Authorization rules":
One can implement a default-deny policy by putting an AnyEndpointMatcher with a nonexistent role at the end of the list. Please note that this will deny all REST APIs, and most of the UI does not display a proper access-denied message on such an error.
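Since Buildbot can only close off the REST API itself, the usual way to hide the web UI completely is HTTP authentication in a reverse proxy in front of it. A minimal nginx sketch, assuming Buildbot's www service listens on port 8010 and an htpasswd file already exists (the hostname and paths are placeholders):
server {
    listen 80;
    server_name buildbot.example.com;              # hypothetical hostname

    location / {
        auth_basic "Buildbot";                     # ask for credentials first
        auth_basic_user_file /etc/nginx/.htpasswd; # hypothetical htpasswd file
        proxy_pass http://127.0.0.1:8010;          # Buildbot www port
        proxy_set_header Host $host;
        proxy_http_version 1.1;                    # the UI uses websockets
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}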

Related

Close proxy API access
Hi community,
Grafana 8.2.5
We have a Grafana system, version 8.2.5. It went through a security audit, where the API access was criticized.
We have enabled anonymous access for users without a login:
[auth.anonymous]
enabled = true
org_name = IT.NRW
org_role = Viewer
When I try to access Grafana like this:
curl http://<fqdn>:3000/api/datasources -> {"message":"Permission denied"}
curl http://admin:<password>@<fqdn>:3000/api/datasources -> a valid JSON object with the datasources etc.
But the security audit also found access to the datasource proxy API:
curl http://<fqdn>:3000/api/datasources/proxy/3/query?db=<db>\&q=SELECT+*+FROM+<ts>\&epoch=ms
So I can ALWAYS query the API, with or without credentials.
The security audit's finding: a Denial of Service (DoS) is possible, maybe some SQL injection as well.
I don't want to discuss that topic here.
I have to close access through the API, at least from other network segments.
Any hints?
Thanks in advance.
I'm a Grafana beginner!
I'm not complaining; the security audit listed the two issues (DoS/SQL injection).
I didn't find any configuration option in grafana.ini for closing the proxy API interface (only datasource whitelisting).
So I added some rules to the NGINX config in front of the Grafana server to forbid proxy API access and return a 40x error, roughly as sketched below.
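For reference, the rule looked roughly like this. This is only a sketch: the trusted network range and the Grafana backend address are placeholders for our actual values.
# inside the existing nginx server block that fronts Grafana
location /api/datasources/proxy/ {
    allow 192.168.1.0/24;              # hypothetical trusted segment
    deny  all;                         # everyone else gets 403
    proxy_pass http://127.0.0.1:3000;  # Grafana backend
}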
With that in place, though, the web UI is no longer able to fetch and render the data.
My conclusion:
the Grafana architecture defines that the proxy API is used by the web UI;
with or without credentials, a user can fire a query (DoS) using the proxy API;
with or without credentials, the query is passed through the proxy API to the datasource, so SQL injection is potentially possible.

How to configure RedHat APIMan Authorization Policy for unprotected endpoints?

We have installed and configured RedHat APIMan for our working API, and the plan is to migrate from our current home-grown tiny gateway to APIMan. The problem is that we have some unprotected endpoints which do not need login (not an "everyone" role; no login required at all). We are using the Keycloak OAuth plugin for roles and the Authorization Policy for API security. When the Authorization Policy is not added, I can allow unauthenticated requests via a boolean value in the Keycloak OAuth policy, but after adding the Authorization Policy, there is no way to let unauthenticated requests pass.
Kamyar. Apiman developer here.
Please file a feature request for this over at https://github.com/apiman/apiman/issues.
I think what you are trying to do may not currently be easily possible, because the Authorization Policy expects a successful auth of some sort before it is hit (to get the roles, etc.).
We probably need a slightly more detailed explanation of your use-case, and then we can figure out whether we can support it. It seems like it should be doable without major changes if I understand correctly.
If and when we add support for the specifics of your requirement, I will endeavour to update this ticket.

Kubernetes authentication method to use for calling the K8s APIs?

I am quite new to Kubernetes and I am looking at certificate-based authentication and token-based authentication for calling K8s APIs. To my understanding, the token-based approach (OpenID + OAuth2) is better, since the id_token gets refreshed by the refresh_token at a certain interval, and it also works well with a browser login point, which is not the case with the certificate-based approach. Any more thoughts on this? I am working with minikube. Can anyone share their thoughts here?
Prefer OpenID Connect or X509 client certificate-based authentication strategies over the others when authenticating users:
X509 client certs: decent authentication strategy, but you'd have to address renewing and redistributing client certs on a regular basis
Static Tokens: avoid them due to their non-ephemeral nature
Bootstrap Tokens: same as static tokens above
Basic Authentication: avoid them due to credentials being transmitted over the network in cleartext
Service Account Tokens: should not be used for end-users trying to interact with Kubernetes clusters, but they are the preferred authentication strategy for applications & workloads running on Kubernetes
OpenID Connect (OIDC) Tokens: best authentication strategy for end users as OIDC integrates with your identity provider (e.g. AD, AWS IAM, GCP IAM ...etc)
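If you go the OIDC route on minikube (as in the question), the API server has to be told about your identity provider. A minimal sketch; the issuer URL and client ID below are placeholders, and each extra-config entry maps to the corresponding kube-apiserver --oidc-* flag:
minikube start \
  --extra-config=apiserver.oidc-issuer-url=https://accounts.example.com \
  --extra-config=apiserver.oidc-client-id=kubernetes \
  --extra-config=apiserver.oidc-username-claim=email \
  --extra-config=apiserver.oidc-groups-claim=groups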
I advise you to use OpenID Connect. OpenID Connect is based on OAuth 2.0; it is designed with more of an authentication focus in mind, however. The explicit purpose of OIDC is to generate what is known as an id-token, and the normal process of generating these tokens is much the same as in OAuth 2.0.
OIDC brings us a step closer to a user-friendly login experience, and also allows us to start restricting access using RBAC.
Also take a look at Dex, which acts as a middleman in the authentication chain. It becomes the Identity Provider and issuer of ID tokens for Kubernetes, but does not itself have any sense of identity. Instead, it allows you to configure an upstream identity provider to provide the users' identity.
As well as any OIDC provider, Dex supports sourcing user information from GitHub, GitLab, SAML, LDAP and Microsoft. Its provider plugins greatly increase the potential for integrating with your existing user management system.
Another advantage that Dex brings is the ability to control the issuance of ID tokens, for example by specifying their lifetime. It also makes it possible to force your organization to re-authenticate. With Dex, you can easily revoke all tokens, but there is no way to revoke a single token.
Dex also handles refresh tokens for users. When a user logs in to Dex, they may be granted an id-token and a refresh token. Programs such as kubectl can use these refresh tokens to re-authenticate the user when the id-token expires (see the sketch below). Since these tokens are issued by Dex, this allows you to stop a particular user from refreshing by revoking their refresh token, which is really useful in the case of a lost laptop or phone.
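For example, kubectl can be pointed at such tokens via its oidc auth-provider. A sketch only; the user name, issuer URL, client credentials and tokens below are all placeholders:
kubectl config set-credentials alice \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://dex.example.com \
  --auth-provider-arg=client-id=kubernetes \
  --auth-provider-arg=client-secret=CLIENT_SECRET \
  --auth-provider-arg=id-token=ID_TOKEN \
  --auth-provider-arg=refresh-token=REFRESH_TOKEN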
Furthermore, by having a central authentication system such as Dex, you need only configure the upstream provider once.
An advantage of this setup is that if any user wants to add a new service to the SSO system, they only need to open a PR against the Dex configuration. This setup also gives users a one-button "revoke access" in the upstream identity provider, revoking their access from all of our internal services. Again, this comes in very useful in the event of a security breach or a lost laptop.
You can find more information here: kubernetes-single-sign-one-less-identity/, kubernetes-security-best-practices.

SSO with keycloak

We are considering using Keycloak as our SSO framework.
According to the Keycloak documentation, for multi-tenancy support the application server should hold all the keycloak.json authentication files, and the way to acquire those files is from the Keycloak admin. Is there a way to get them dynamically via an API? Or at least to get the realm public key? We would like to avoid manually adding this file to the application server for each realm (to avoid downtime, etc.).
Another multi-tenancy related question: according to the documentation, the same clients should be created for each realm. So if I have 100 realms and 10 clients, should I define the same 10 clients 100 times? Is there an alternative?
One of our flows is a backend micro-service that should be authenticated against an application (defined as a Keycloak client). We would like to avoid keeping a username/password on the server for security reasons. Is there a way for an admin to acquire a token and place it manually on the server file system for that micro-service? Is there an option to generate this token in the Keycloak UI?
Thanks in advance.
All Keycloak functionality is available via the admin REST API, so you can automate this. The realm's public key is available via http://localhost:8080/auth/realms/{realm}/
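For example, for a hypothetical realm named myrealm on a local server (in Keycloak versions of this era, the realm endpoint returns the public key directly):
curl http://localhost:8080/auth/realms/myrealm
# -> {"realm":"myrealm","public_key":"MIIBIjANBg...", ...}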
A realm for each tenant will give a tenant-specific login page, so this is the way to go: 10 clients registered 100 times. See the chapter "Client Registration" of the Keycloak documentation. If you don't need specific themes, you can opt to put everything in one realm, but you will lose a lot of flexibility down that path.
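Registering the same clients across many realms can be scripted through the admin REST API as well. A sketch, assuming you already hold an admin access token in $TOKEN; the realm and client names are placeholders:
for realm in realm1 realm2 realm3; do
  curl -X POST "http://localhost:8080/auth/admin/realms/$realm/clients" \
       -H "Authorization: Bearer $TOKEN" \
       -H "Content-Type: application/json" \
       -d '{"clientId": "my-client", "enabled": true}'
done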
If your backend micro-service should appear as one (technical) user, you can issue an offline token that doesn't expire; see the online documentation for offline tokens. Currently there is no admin functionality to retrieve an offline token on behalf of a user, so you'll need to build this yourself. An admin can later revoke offline tokens using the given admin API.
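For illustration, an offline token can be requested from the token endpoint by adding the offline_access scope. The host, realm, client and credentials below are placeholders; the refresh_token field in the response is the offline token:
curl -d "client_id=my-client" \
     -d "username=service-user" \
     -d "password=secret" \
     -d "grant_type=password" \
     -d "scope=offline_access" \
     http://localhost:8080/auth/realms/myrealm/protocol/openid-connect/token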

Authenticating Gitweb with Gitosis without LDAP Auth?

I found your article about using Apache auth with gitweb and gitosis.
I was wondering if there was a way to do this if I wasn't using LDAP for authentication. We currently have a very large NIS domain which we use for authentication on all unix servers.
We use this for SVN repositories through a UI, but for this case I am trying to meet a requirement of:
Git Repositories
Access Controlled - using Gitolite
Online UI - using Gitweb
UI must also have Access Control - not yet implemented
I was thinking that first I need to get Gitweb and Gitolite to play together; each one works individually at the moment.
If gitolite provides access using SSH keys, then it could provide access this way by having a key for each machine a user/developer will access Gitweb/gitolite from.
Or, better, if I can get gitweb to simply authenticate users from the NIS domain, since every user has an account that our IT department sets up.
Any ideas or howtos I can use to get further on this requirement?
The way you link gitweb and gitosis together is by:
having gitweb configuration files with names identical to NIS logins
having gitweb.conf (from gitolite) included in gitweb_config.perl, as described in this blog post (add at the end of gitweb_config.perl):
use lib (".");
require "gitweb.conf";
using NIS authentication in your Apache2 httpd.conf (or extra/httpd-ssl.conf if you are using HTTPS); see the sketch below
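For the NIS part in the last step, one common approach is Apache's mod_authnz_external together with pwauth, which authenticates against the system's account database (and therefore NIS). A sketch only; the module path, pwauth path and protected location vary by distribution and setup:
LoadModule authnz_external_module modules/mod_authnz_external.so
DefineExternalAuth pwauth pipe /usr/sbin/pwauth
<Location /gitweb>
    AuthType Basic
    AuthName "Gitweb"
    AuthBasicProvider external
    AuthExternal pwauth
    Require valid-user
</Location>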
Once a user is authenticated (be it with Basic, LDAP or NIS auth), $cgi->remote_user will be set, and it is that login which is passed (by the gitolite gitweb.conf) to the gitolite perl script managing Git access rights (ACLs).
The Git ACLs are still managed by SSH key and are independent of the login mechanism, except for the login part, which enables gitolite to make the right account association.