Access to audit events of the UAA (User Account and Authentication) server in the Swisscom cloud - swisscomdev

Is it possible to get access to the events generated by the User Account and Authentication (UAA) server in the context of the Swisscom Application Cloud?
It is essential for me to have an audit trail of the actions executed by authorised operators through the API (which would include the CLI and the portal).
What I am looking for is an equivalent of AWS CloudTrail for the IAM module, which you can turn on for specific VPCs/regions there.
I have found this in the CF documentation (https://docs.cloudfoundry.org/loggregator/cc-uaa-logging.html), but that (as far as I understand it) requires infrastructure-level access.
Thanks a lot for any hints.

We can't expose the UAA logs to individual customers, since they probably contain sensitive information about other users or the platform.
You should, however, be able to retrieve this information from your application logs (which you can send to a syslog drain, e.g. the ELK/Elasticsearch service).
All API interactions should be covered by this log stream, according to the documentation:
Users make API calls to request changes in app state. Cloud Controller, the Cloud Foundry component responsible for the API, logs the actions that Cloud Controller takes in response.
For example:
2016-06-14T14:10:05.36-0700 [API/0] OUT Updated app with guid cdabc600-0b73-48e1-b7d2-26af2c63f933 ({"name"=>"spring-music", "instances"=>1, "memory"=>512, "environment_json"=>"PRIVATE DATA HIDDEN"})
From https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html

Related

How to authenticate to a Google Cloud API from Java in the same way I authenticated with the gcloud CLI

Using the gcloud command line I can perform the following operation:
gcloud builds describe 74f859e9-d621-4632-b6dd-XXXXXXXX
However, I wish to use the Google Cloud API from Java. As I understand it, the gcloud CLI is not using a service account; it is using a user account. How can I use the same authentication from the Google Cloud Java API to perform this same operation and describe a build?
Google provides decent documentation that explains how to use its SDKs (Client Libraries) with all of its services.
Here's the Cloud Build client libraries documentation. Pick your preferred language and go.
If you can't use one of Google's SDKs, you can write code directly against the underlying API. Google's APIs Explorer is an excellent tool for navigating all of Google's services. Here's Cloud Build and projects.builds.get, which I think (!?) maps to gcloud builds describe. You can confirm that by running gcloud builds describe --log-http to see which underlying calls are made.
Code that doesn't access user data (data owned by a user account) should run as a service account. Code that accesses user data or operates on behalf of a user should use the OAuth flow for that user and an OAuth client ID. This is what gcloud does: as a program operating on behalf of users, it authenticates you (the user) through a regular OAuth flow, but it operates using an OAuth client ID against a hidden backing project. Your code should probably just run as a service account.
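To make that concrete, here is a minimal sketch assuming the google-cloud-build Java client library; it relies on Application Default Credentials, so it runs as a service account when GOOGLE_APPLICATION_CREDENTIALS points to a key file ("my-project" is a placeholder project ID, and the build ID is the one from the question):

import com.google.cloud.devtools.cloudbuild.v1.CloudBuildClient;
import com.google.cloudbuild.v1.Build;

public class DescribeBuild {
    public static void main(String[] args) throws Exception {
        // Uses Application Default Credentials: GOOGLE_APPLICATION_CREDENTIALS
        // locally, or the ambient service account when running on GCP.
        try (CloudBuildClient client = CloudBuildClient.create()) {
            Build build = client.getBuild("my-project", "74f859e9-d621-4632-b6dd-XXXXXXXX");
            System.out.println(build.getStatus() + " " + build.getLogUrl());
        }
    }
}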

Alternatives to JSON file credentials?

My Java backend server has to upload files to Google Cloud Storage (GCS).
Right now I just run:
import java.io.IOException;
import java.util.Objects;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;
import org.springframework.web.multipart.MultipartFile;

public void store(MultipartFile multipartFile) throws IOException {
    // Builds a client from Application Default Credentials and uploads
    // the file under its original name into BUCKET_NAME.
    Storage storage = StorageOptions.getDefaultInstance().getService();
    storage.create(
        BlobInfo.newBuilder(
                BUCKET_NAME,
                Objects.requireNonNull(multipartFile.getOriginalFilename()))
            .build(),
        multipartFile.getBytes());
}
Having set GOOGLE_APPLICATION_CREDENTIALS=$PROJECT_DIR$/project-1234-abcdefg.json in my environment.
However, this makes things complicated for my deployment setup. I don't know how I would go about making this file available to my service.
Is there another way to get access to GCS for my service account?
Background
I am deploying my server to Heroku as a compiled jar file and I don't know how to make the credentials available to my server during deployment.
You need a Google account to access GCS, either personal or technical; a technical account is a service account.
However, there is another solution, though it's not really easy to implement. I wrote an article about securing a serverless product with Cloud Endpoints and an API key. Here your serverless solution can be Cloud Storage, but that implies calling GCS through the REST API rather than the Java library, which is not much fun. It also implies additional cost for the hosting and processing time of Cloud Endpoints.
Note: you can upgrade the authorization from an API key to Firebase Auth or something else if you prefer. Check the Cloud Endpoints authentication capabilities.
Note 2: Google is working on another authentication mechanism, but I don't know what stage the development is at, or whether it's planned for 2020. In any case, your constraint is known and is being addressed by Google.
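As a sketch of one more alternative that avoids shipping the key file: you can store the key's JSON content in an environment variable (GCP_CREDENTIALS is a hypothetical name here, set e.g. as a Heroku config var) and build the Storage client from a stream, assuming the same google-cloud-storage Java library:

import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class GcsFromEnv {
    public static Storage storage() throws IOException {
        // GCP_CREDENTIALS is a hypothetical config var holding the raw key JSON.
        GoogleCredentials credentials = GoogleCredentials.fromStream(
                new ByteArrayInputStream(
                        System.getenv("GCP_CREDENTIALS").getBytes(StandardCharsets.UTF_8)));
        return StorageOptions.newBuilder()
                .setCredentials(credentials)
                .build()
                .getService();
    }
}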

Retrieve logged-in user information from a Cloud Foundry web application

We developed a web application using SAP Web IDE Full-Stack; we need to retrieve the details of the user logged into the application (as defined in SAP Cloud Platform Identity Authentication Administration), for example the display name and assigned groups.
We tried the userapi/currentUser API, but it seems to work only in the Neo environment; for this reason it works fine while debugging in Web IDE, but we get a 404 error when the app is deployed on Cloud Foundry.
Do we need to add a new destination to make userapi work also on CF? Or is there some kind of similar solution available on Cloud Foundry?
I highly suggest using the SAP S/4HANA Cloud SDK for such tasks. It is an SDK developed to make building applications for SAP Cloud Platform easy, by providing easy-to-use abstractions for the various Cloud Platform mechanisms.
Regarding your task at hand, there is a UserAccessor class that you can use like this:
final Optional<User> user = UserAccessor.getCurrentUser();
This works on Neo as well as on Cloud Foundry, i.e. there is a single interface for both platforms, which allows you to develop your app in a platform-agnostic way.
If this sounds like it could solve your problem, I recommend checking out this blog post series to get started.
Alternatively, you can also simply add the following dependency to your project to start testing the SDK:
<dependency>
    <groupId>com.sap.cloud.s4hana.cloudplatform</groupId>
    <artifactId>scp-neo</artifactId>
    <version>2.7.0</version>
</dependency>
For Cloud Foundry use scp-cf instead of scp-neo.
Hope this helps!
P.S.: To answer your question on a technical level as well: Cloud Foundry uses so-called JSON Web Tokens (JWTs) for authentication and authorization. You can check whether a JWT is present by looking at the Authorization header of the request. The JWT should hold the information you're looking for.
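As a quick illustration of that technical detail: the JWT's payload is just base64url-encoded JSON, so you can inspect the claims with a few lines of Java. A minimal sketch (it only decodes, it does not verify the signature, so treat it as a debugging aid):

import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtPeek {
    // Pass the value of the Authorization header, e.g. "Bearer eyJhbGci...".
    public static String claims(String authorizationHeader) {
        String token = authorizationHeader.replaceFirst("^Bearer ", "");
        String payload = token.split("\\.")[1]; // header.payload.signature
        return new String(Base64.getUrlDecoder().decode(payload),
                StandardCharsets.UTF_8); // JSON with user name, email, etc.
    }
}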
In SAP Cloud Foundry, if you develop an MTA that uses the XSUAA service to manage user authentication and administration, defined for example in the mta.yaml,
...
resources:
- name: uaa_myapp
  parameters:
    path: ./xs-security.json
    service-plan: application
    service: xsuaa
  type: org.cloudfoundry.managed-service
...
you can use the UAA API published by the XSUAA service itself to manage user authentication and authorization (e.g. retrieve user info, assigned groups, password management, etc.), even when the application is federated with another IdP.
To consume this API, for example to retrieve user info, you need to:
1. Determine the XSUAA endpoint bound to your app (SCP Cockpit > XSUAA service detail > take the value of url).
2. Create a destination (xsuaa_api_destination) of type OAuth2TokenExchange bound to your app, with the URL taken in step 1, and fill in the OAuth2 authentication parameters with the data contained in the XSUAA service detail.
3. From your app, execute the call xsuaa_api_destination/userinfo, for example via an AJAX request if you are using JavaScript (see the sketch after this list).
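Assuming you already hold the user's access token, step 3 could look roughly like this in Java (XSUAA_URL and XSUAA_TOKEN are hypothetical environment variables; in a real app the destination service would resolve the URL and token for you):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class UserInfoCall {
    public static void main(String[] args) throws Exception {
        // Placeholders: the XSUAA base URL from step 1 and a valid OAuth2 token.
        String xsuaaUrl = System.getenv("XSUAA_URL");
        String token = System.getenv("XSUAA_TOKEN");
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(xsuaaUrl + "/userinfo"))
                .header("Authorization", "Bearer " + token)
                .GET()
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON with user_name, email, ...
    }
}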
You can find further information in the Account and Authentication Service of the Cloud Foundry Environment SAP documentation.

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

I'm trying to get Google Cloud Storage working in my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments). You need to provide one yourself. So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
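The thread here is Ruby, but the constraint is the same in other clients. For instance, a minimal Java sketch of making signing work by supplying the key explicitly (the bucket, object name, and key file path are placeholders):

import java.io.FileInputStream;
import java.net.URL;
import java.util.concurrent.TimeUnit;
import com.google.auth.oauth2.ServiceAccountCredentials;
import com.google.cloud.storage.BlobInfo;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

public class SignedUrlExample {
    public static void main(String[] args) throws Exception {
        // Signing needs the private key, which ambient OAuth tokens lack,
        // so load a full service account key file (path is a placeholder).
        ServiceAccountCredentials key = ServiceAccountCredentials.fromStream(
                new FileInputStream("service-account-key.json"));
        Storage storage = StorageOptions.newBuilder()
                .setCredentials(key).build().getService();
        URL signed = storage.signUrl(
                BlobInfo.newBuilder("my-bucket", "image.png").build(),
                15, TimeUnit.MINUTES,
                Storage.SignUrlOption.signWith(key));
        System.out.println(signed);
    }
}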
I was able to figure this out. I had been following the Rails guide on Active Storage with Google Cloud Storage, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
If you're running your application on Google App Engine or Google Compute Engine, the environment already provides a service account's authentication information, so no further setup is required.
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my Rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. Calling Google::Auth.get_application_default as in the documentation returned an empty object for me, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash as the project name in the signing URL (i.e. projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid) and that you have the "issuer" string correct: the full email address identifier of the service account, not just the service account name.
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
roles/iam.serviceAccountTokenCreator is required to call the signBlob service, and you need roles/storage.admin to have ownership of the object you need to sign. I think these are project-global rights; I couldn't get it to work with more fine-grained permissions, unfortunately (i.e. one app being admin for a specific Storage bucket).

SSO with Keycloak

We are considering using Keycloak as our SSO framework.
According to the Keycloak documentation, for multi-tenancy support the application server should hold all the keycloak.json authentication files, and the way to acquire those files is from the Keycloak admin console. Is there a way to get them dynamically via an API, or at least to get the realm public key? We would like to avoid manually adding this file for each realm to the application server (to avoid downtime, etc.).
Another multi-tenancy-related question: according to the documentation, the same clients should be created for each realm. So if I have 100 realms and 10 clients, do I have to define the same 10 clients 100 times? Is there an alternative?
One of our flows involves a backend microservice that should be authenticated against an application (defined as a Keycloak client). We would like to avoid keeping a username/password on the server for security reasons. Is there a way for an admin to acquire a token and place it manually on the server file system for that microservice? Is there an option to generate this token in the Keycloak UI?
Thanks in advance.
All Keycloak functionality is available via the admin REST API, so you can automate this. The realm's public key is available via http://localhost:8080/auth/realms/{realm}/
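For example, fetching the realm descriptor needs nothing more than an unauthenticated GET, and on Keycloak versions of that era the returned JSON should include a public_key field. A minimal Java sketch ("myrealm" and the host are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RealmKeyFetch {
    public static void main(String[] args) throws Exception {
        // "myrealm" is a placeholder realm name.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/auth/realms/myrealm"))
                .GET()
                .build();
        String json = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
        System.out.println(json); // {"realm":"myrealm","public_key":"MIIB...",...}
    }
}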
A realm for each tenant will give a tenant-specific login page; therefore this is the way to go: 10 clients registered 100 times. See more in the Client Registration chapter of the Keycloak documentation. If you don't need specific themes, you can opt to put everything in one realm, but you will lose a lot of flexibility on that path.
If your backend microservice should appear as one (technical) user, you can issue an offline token that doesn't expire. This is the online documentation for offline tokens. Currently there is no admin functionality for retrieving an offline token for a user by an admin; you'll need to build this yourself. An admin can later revoke offline tokens using the given admin API.
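To sketch the "build this yourself" part: an offline token can be obtained once through the standard OIDC token endpoint by requesting the offline_access scope (the realm, client, user, and URL below are all placeholders); the refresh_token in the response is the offline token an admin could then place on the microservice's file system:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class OfflineTokenFetch {
    public static void main(String[] args) throws Exception {
        // Resource owner password grant with the offline_access scope;
        // the response JSON's "refresh_token" is the offline token.
        String form = "grant_type=password"
                + "&client_id=my-client"
                + "&username=service-user"
                + "&password=service-password"
                + "&scope=offline_access";
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/auth/realms/myrealm/protocol/openid-connect/token"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}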