Audit Log Export and Aggregation - GitHub

I have a complex infrastructure on Google Cloud Platform, and a GitHub organization with several members at different access levels. I would like to know if there is a tool or process that aggregates audit logs from services like GCP and GitHub, and stores and displays them.

Related

IBM Cloud Foundry - deployment error: could not create organisation names

I am not able to create Cloud Foundry orgs, even with a unique organisation name.
I even tried different names, but the organisations are not being created.

AWS Proton vs CloudFormation

Recently I looked at the AWS Proton service and tried a hands-on exercise, but unfortunately I was not able to succeed.
What I am not able to understand is what advantage Proton gives me, because I can already build the end-to-end pipeline using CodeCommit, CodeDeploy, CodePipeline, and CloudFormation.
It would be great if someone could jot down the use cases where Proton is preferable to the components I listed above.
From what I understand, AWS Proton is similar to AWS Service Catalog in that it allows administrators to prepare CloudFormation (CFN) templates which developers/users can provision when they need them. The difference is that AWS Service Catalog is geared towards general users, e.g. those who just want to start a pre-configured instance prepared by administrators, or provision entire infrastructures from a set of approved architectures (e.g. instance + RDS + Lambda functions). In contrast, AWS Proton is geared towards developers, so that they can provision by themselves the entire architectures they need for development, such as CI/CD pipelines.
In both cases, CFN is the primary way these architectures are defined and provisioned. You can think of AWS Service Catalog and AWS Proton as high-level services, and CFN as the low-level service used as a building block for the other two.
because I can already build the end-to-end pipeline using CodeCommit, CodeDeploy, CodePipeline, and CloudFormation
Yes, in both cases (AWS Service Catalog and AWS Proton) you can do all of that. But not everyone wants to. Many AWS users and developers do not have the time or interest to define all the solutions they need in CFN; it is time-consuming and requires experience. Also, it is not good security practice to allow everyone in your account to provision whatever they need without any constraints.
AWS Service Catalog and AWS Proton solve these issues: you can pre-define a set of CFN templates and allow your users and developers to provision them easily. They also provide clear role separation in your account, so you have administrators who manage the infrastructure and users/developers who consume it. This way, both groups concentrate on what they know best: infrastructure as code and software development.
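To make the "CFN as the low-level building block" point concrete, here is a minimal sketch (not from the answer; the stack name and template are made up) that provisions a stack directly against the CloudFormation API using the AWS SDK for Java v2. Service Catalog and Proton essentially curate and gate this step for you:

```java
import software.amazon.awssdk.services.cloudformation.CloudFormationClient;
import software.amazon.awssdk.services.cloudformation.model.CreateStackRequest;
import software.amazon.awssdk.services.cloudformation.model.CreateStackResponse;

public class ProvisionStack {
    public static void main(String[] args) {
        // CFN is the low-level building block: Service Catalog and Proton
        // ultimately drive this same API with pre-approved templates.
        try (CloudFormationClient cfn = CloudFormationClient.create()) {
            CreateStackRequest request = CreateStackRequest.builder()
                    .stackName("dev-pipeline")        // hypothetical stack name
                    .templateBody(templateBody())     // admin-supplied CFN template
                    .build();
            CreateStackResponse response = cfn.createStack(request);
            System.out.println("Created stack: " + response.stackId());
        }
    }

    // Placeholder for a pre-defined template an administrator would curate.
    private static String templateBody() {
        return """
                AWSTemplateFormatVersion: '2010-09-09'
                Resources:
                  LogsBucket:
                    Type: AWS::S3::Bucket
                """;
    }
}
```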

Spring Cloud Data Flow with Azure Event Hub limitations?

We plan to run Spring Cloud Data Flow on Azure, using Azure Event Hub as the messaging binder.
Azure Event Hub has hard limits:
100 namespaces
10 topics per namespace.
The Spring Cloud Azure Event Hub Stream Binder seems to be able to configure only one namespace, so how can we manage multiple namespaces?
Maybe we should use multiple binders, so that we have multiple instances of the Spring Cloud Azure Event Hub Stream Binder?
Does anyone have any ideas? or documentation we did not find?
Regards
Rémi
Spring Cloud Data Flow and Spring Cloud Skipper support the concept of "platform accounts". Using that, you can set up multiple accounts, one for each namespace, or even for different K8s clusters. This gives you a lot of flexibility to work around these hard limits in the Azure stack.
We have a recipe on multi-platform deployments.
When deploying streams from SCDF, you pick and choose the platform account (i.e., the namespace and other configuration), so the deployed stream apps (with the Azure binder on the classpath) automatically run in different namespaces, effectively dodging the limits enforced by Azure.
Provenance tracking of where the apps run, as well as the audit trail, is also captured automatically in SCDF, so at any given time you know who did what and in which namespace.
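As a rough sketch of the idea (the server URI, account names, and stream definition are assumptions, not part of the answer, and it assumes the platformName deployment property selects the Skipper platform account), deploying the same definition against two platform accounts with the SCDF Java DSL could look like this:

```java
import java.net.URI;
import java.util.Map;

import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;
import org.springframework.cloud.dataflow.rest.client.dsl.Stream;

public class MultiNamespaceDeploy {
    public static void main(String[] args) {
        // Hypothetical SCDF server location.
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://localhost:9393"));

        // One platform account per Event Hub namespace (names are made up);
        // each deployment lands in the namespace its account points at.
        for (String account : new String[] {"eventhub-ns-1", "eventhub-ns-2"}) {
            Stream.builder(dataFlow)
                    .name("ingest-" + account)
                    .definition("http | log")
                    .create()
                    .deploy(Map.of(
                            "spring.cloud.dataflow.skipper.platformName", account));
        }
    }
}
```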

Spring Cloud Data Flow: versioned streams

I'm implementing a stream pipe with Spring Cloud Data Flow.
My problem is that I configured the pipe manually (e.g. http | log_sink) on the server, and it will be lost if I reset that server (think of an Amazon EC2 instance that can be hard reset).
What is the suggested way to version pipes in SCDF?
Thanks.
I am summarizing the discussion from the comments.
To automate the promotion of stream/task workloads from lower to higher environments, the recommended approach is SCDF's Java DSL. With it, users can programmatically register, create, deploy, or launch streams/tasks in a repeatable manner, and across many different platforms simultaneously if there is a need for it. The Boot app built with the Java DSL can be versioned in Git, which makes it CD/GitOps friendly. With sufficient generalization, the same app can be reused by many different teams by overriding the defaults.
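For illustration, a minimal Java DSL bootstrap might look like the following; the server URI and app coordinates are assumptions, and the exact register(..) signature varies a little between SCDF versions:

```java
import java.net.URI;

import org.springframework.cloud.dataflow.core.ApplicationType;
import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;
import org.springframework.cloud.dataflow.rest.client.dsl.Stream;

public class PipelineBootstrap {
    public static void main(String[] args) {
        // Hypothetical SCDF server URI.
        DataFlowTemplate dataFlow = new DataFlowTemplate(URI.create("http://localhost:9393"));

        // Re-register the apps so a hard-reset server can be rebuilt from scratch
        // (Maven coordinates below are placeholders).
        dataFlow.appRegistryOperations().register("http", ApplicationType.source,
                "maven://org.springframework.cloud.stream.app:http-source-rabbit:3.2.1",
                null, true);
        dataFlow.appRegistryOperations().register("log", ApplicationType.sink,
                "maven://org.springframework.cloud.stream.app:log-sink-rabbit:3.2.1",
                null, true);

        // Recreate and deploy the pipe; this class is what you version in Git.
        Stream.builder(dataFlow)
                .name("http-log")
                .definition("http | log")
                .create()
                .deploy();
    }
}
```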
We use this in the product proper for our IT and acceptance tests, which run daily on every upstream commit across multiple Kubernetes and Cloud Foundry installations.
Alternatively, all of the register, create, deploy, or launch stream/task commands can be dumped into a text or property file. Once you have the file, the dataflow:>script --file command can slurp in all the commands in each new environment; see the docs.
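For example, a hypothetical deploy.txt (the app URIs are placeholders) that you would replay in each environment with dataflow:>script --file deploy.txt:

```
app register --name http --type source --uri maven://org.springframework.cloud.stream.app:http-source-rabbit:3.2.1
app register --name log --type sink --uri maven://org.springframework.cloud.stream.app:log-sink-rabbit:3.2.1
stream create --name http-log --definition "http | log" --deploy
```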

Set Monitoring Level to Verbose in an Azure Web Role using Powershell

I've created some custom performance counters in our web application, deployed to an Azure Web Role. To be able to see the values of those performance counters in the dashboard, I have to go to the portal, set the Monitoring Level to Verbose, and add the new metrics to the dashboard.
The problem is that we create the infrastructure by code using PowerShell, and every time we recreate the infrastructure, we lose these settings.
Can I set the Monitoring Level and the Metrics (and possibly alerts) via PowerShell?
It seems that you cannot set the monitoring level and metrics via PowerShell or the REST API. The only thing you are allowed to do via REST is create alerts: http://msdn.microsoft.com/en-us/library/azure/dn510366.aspx
Thanks.