Our infrastructure is hosted on Google Cloud and uses PostgreSQL instances via Cloud SQL. I need to configure logging for HIPAA compliance.
I have read 2 articles from Google's documentation:
https://cloud.google.com/logging/docs/audit/configure-data-access#config-console
https://cloud.google.com/sql/docs/postgres/pg-audit#overview
The first talks about enabling Audit Logs from within IAM; there I can select Cloud SQL and enable read and write logs for both data and admin operations.
The second talks about the pgAudit extension and sets the following flag: pgaudit.log=all
I have a couple of questions:
How do IAM logs and pgAudit differ? Should I enable both, or is there redundancy in doing so?
For HIPAA compliance using pgAudit, should I log all, or is there another value that makes more sense?
How do IAM logs and pgAudit differ? Should I enable both, or is there redundancy in doing so?
Well, the IAM logs focus on admin activity and data access:
Admin Activity audit logs: Includes "admin write" operations that
write metadata or configuration information.
Data Access audit logs: Includes "admin read" operations that read
metadata or configuration information. Also includes "data read" and
"data write" operations that read or write user-provided data.
On the other hand, the pgAudit extension applies to executed SQL commands and queries.
Basic statement logging can be provided by the standard logging
facility with log_statement = all. This is acceptable for monitoring
and other usages but does not provide the level of detail generally
required for an audit. It is not enough to have a list of all the
operations performed against the database. It must also be possible to
find particular statements that are of interest to an auditor. The
standard logging facility shows what the user requested, while pgAudit
focuses on the details of what happened while the database was
satisfying the request.
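To make the difference concrete, here is a minimal sketch on a self-managed PostgreSQL instance (it assumes shared_preload_libraries = 'pgaudit' and superuser rights; on Cloud SQL the equivalents are the cloudsql.enable_pgaudit and pgaudit.log database flags, and the table name is invented):

    -- Load the extension (requires shared_preload_libraries = 'pgaudit').
    CREATE EXTENSION pgaudit;

    -- Audit writes, DDL, and role changes for this session (superuser only).
    SET pgaudit.log = 'write, ddl, role';

    CREATE TABLE patient_record (id int, phi text);    -- hypothetical table
    INSERT INTO patient_record VALUES (1, 'example');

    -- log_statement = 'all' would merely echo the statement text; pgAudit
    -- instead emits classified entries roughly of the form:
    --   AUDIT: SESSION,1,1,DDL,CREATE TABLE,TABLE,public.patient_record,...
    --   AUDIT: SESSION,2,1,WRITE,INSERT,TABLE,public.patient_record,...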
For HIPAA compliance using pgAudit, should I log all, or is there another value that makes more sense?
When it comes to HIPAA compliance, I do not have any experience in the topic, but this page mentions that part of the technical safeguards of the HIPAA Security Rule is to introduce activity logs and audit controls.
Combining the IAM logs (who did what, where, and when?) with pgAudit (executed commands and queries) should provide better coverage for this implementation specification.
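On the second question, 'all' gives the most complete trail, but it can be noisy. A hedged sketch of a narrower setting on a self-managed instance (on Cloud SQL you would set pgaudit.log as a database flag instead of using ALTER SYSTEM; the class list is an assumption, not official HIPAA guidance):

    -- 'read, write, ddl, role' still captures PHI reads, data changes,
    -- schema changes, and privilege changes, while skipping FUNCTION and
    -- MISC noise.
    ALTER SYSTEM SET pgaudit.log = 'read, write, ddl, role';
    SELECT pg_reload_conf();  -- pgaudit.log can be reloaded without a restart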
Related
This question is about infosec and data privacy, specifically HIPAA compliance on GCP.
Are there any advantages to self-managing my own Postgres server (built on GCP Compute Engine instances using, let's say, Terraform) versus using the managed offering, i.e. Cloud SQL?
Thanks in advance
Google Cloud SQL Postgres is a fully managed option for deploying PostgreSQL to Google Cloud. The fully managed option is convenient, but it is mainly suitable for cloud-native applications or applications rebuilt for the cloud.
It has built-in encryption for database tables, temporary files, backups, and any data transferred over Google's internal networks, as well as secure connections via SSL/TLS or the Cloud SQL Proxy.
Update 1
As you are referring to HIPAA, you can check this guide for HIPAA compliance on Google Cloud. Cloud SQL encrypts data at rest using the 256-bit Advanced Encryption Standard (AES-256), or better, with symmetric keys: that is, the same key is used to encrypt the data when it is stored and to decrypt it when it is used. You can also use your own encryption keys with CMEK (customer-managed encryption keys) for Cloud SQL.
You also mentioned infosec. I have not completely understood the term, but I assume you are referring to securing information from vulnerabilities. You can use Cloud Armor, a network security service that provides defenses against DDoS and application attacks like cross-site scripting (XSS) and SQL injection (SQLi).
Self-hosted Postgres gives you full control over your PostgreSQL database on GCP, letting you fine-tune server parameters, modify database configuration, and tune performance, just like in a local deployment.
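For example, on a self-hosted instance you can change server parameters directly, which Cloud SQL restricts to its whitelist of database flags. A rough sketch (the values are illustrative, not recommendations):

    ALTER SYSTEM SET shared_buffers = '4GB';               -- takes effect after a restart
    ALTER SYSTEM SET work_mem = '64MB';
    ALTER SYSTEM SET log_min_duration_statement = '500ms';
    SELECT pg_reload_conf();                               -- applies the reloadable settings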
Update 2
As per this thread, it seems that PostgreSQL on its own is not HIPAA compliant.
For encryption at rest on PostgreSQL, you can use PostgreSQL TDE and pgcrypto, as discussed in this similar thread.
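A minimal pgcrypto sketch of column-level encryption (the table, value, and key are invented, and in practice the key should come from a secrets manager rather than being inlined in SQL):

    CREATE EXTENSION IF NOT EXISTS pgcrypto;

    -- pgp_sym_encrypt returns bytea, so the protected column is bytea.
    CREATE TABLE patient_phi (id int, phi bytea);
    INSERT INTO patient_phi
    VALUES (1, pgp_sym_encrypt('blood type: O+', 'replace-with-real-key'));

    -- Decrypt on read with the same symmetric key.
    SELECT pgp_sym_decrypt(phi, 'replace-with-real-key')
    FROM patient_phi WHERE id = 1;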
For self-hosted Postgres, you can also use Shielded VMs, which protect enterprise workloads from threats like remote attacks, privilege escalation, and malicious insiders.
I am not sure of your application requirements, but based on my understanding of both Cloud SQL and self-hosted Postgres, I would recommend Cloud SQL as the better option, as it is fully managed by Google and also supports HIPAA compliance and encryption.
For more information about the pros and cons of Google Cloud SQL Postgres and self-hosted Postgres, check this document.
Confluent has something called Audit Logs, which are written to internal topics (if configured) and which log access to Kafka resources, such as clients writing to or reading from particular topics. That's all great; however, there are components in a Confluent/Kafka setup, such as Confluent Control Center, which (should) have RBAC set up for users to log in and use.
It is possible to set this up and have users log in with a username and password, but I am having difficulty locating where exactly Confluent provides logs of successful/unsuccessful login attempts. If I set the Control Center logs to DEBUG, I can see the HTTP communication and the password lookup when a user tries to log in, but I don't see an option for admin review and control of such events. Audit Logs are apparently only for Kafka resources. Is there no option other than building a custom solution that scrapes DEBUG logs?
The list of auditable events is documented here. Confluent Control Center login events are unfortunately not among them.
Is it possible to get access to events generated by the User Account and Authentication (UAA) server in the context of the Swisscom Application Cloud?
It is essential for me to have an audit trail of actions executed by authorized operators through the API (which includes the CLI and the portal).
What I am looking for is an alternative to AWS CloudTrail for the IAM module, which you can turn on for specific VPCs/regions there.
I have found this in the CF documentation (https://docs.cloudfoundry.org/loggregator/cc-uaa-logging.html), but that (as far as I understand it) requires infrastructure-level access.
Thanks a lot for any hints.
We can't expose UAA logs to individual customers, since they probably contain sensitive information about other users or the platform.
You should, however, be able to see these events in your application logs (which you can send to a syslog drain, i.e. the ELK/Elasticsearch service).
All API interactions should be covered by this log stream, according to the documentation:
Users make API calls to request changes in app state. Cloud Controller, the Cloud Foundry component responsible for the API, logs the actions that Cloud Controller takes in response.
For example:
2016-06-14T14:10:05.36-0700 [API/0] OUT Updated app with guid cdabc600-0b73-48e1-b7d2-26af2c63f933 ({"name"=>"spring-music", "instances"=>1, "memory"=>512, "environment_json"=>"PRIVATE DATA HIDDEN"})
From https://docs.cloudfoundry.org/devguide/deploy-apps/streaming-logs.html
When creating users in the Db2 console on IBM Cloud, is there a way to restrict user access to selected schema(s) only? I am running on the SMP Small version.
Different options exist and can be mixed, depending on your security model and operational requirements. This is a large topic. Refer to the Db2 documentation for details of the CREATE ROLE, GRANT, and REVOKE statements, and plan whether you will use roles or users/groups or both.
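As a rough sketch of the role-based variant (the schema, role, and user names are invented):

    -- One role per level of access, granted privileges only within one schema.
    CREATE ROLE sales_readers;
    CREATE ROLE sales_developers;

    -- Table-level privileges for readers.
    GRANT SELECT ON TABLE sales.orders TO ROLE sales_readers;
    GRANT SELECT ON TABLE sales.customers TO ROLE sales_readers;

    -- Schema privileges control who may create or alter objects in the schema.
    GRANT CREATEIN, ALTERIN ON SCHEMA sales TO ROLE sales_developers;

    -- Attach users to roles; revoke anything granted too broadly
    -- (this REVOKE only succeeds if PUBLIC actually holds the privilege).
    GRANT ROLE sales_readers TO USER joe;
    REVOKE SELECT ON TABLE sales.orders FROM PUBLIC;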
I would like to find out what the best practices are for managing developers' access to a subset of resources on a client's subscription.
I've searched Google and the Azure documentation looking for definitive answers, but I have yet to come across an article that puts it all together. Because Azure is still developing so rapidly I often find it difficult to determine whether a particular article may still be relevant.
To sum up our situation:
I've been tasked with researching and implementing the Azure infrastructure for a web site our company is developing for a client. At the moment our manager and I have access to the client's entire subscription on the Azure Portal by means of the Service Administrator's credentials, even though we're managing only:
Azure Cloud Service running a Web Role (2 instances, with Production and Staging environments).
Azure SQL Database.
Azure Blob Storage for deployments, diagnostics etc.
We're now moving into a phase where more of the developers on the team will require access to perform maintenance-type tasks such as performing a VIP swap, retrieving diagnostic info, etc.
What is the proper way to manage developers' access on such a project?
The approach I've taken was to implement Role-Based Access Control (https://azure.microsoft.com/en-us/documentation/articles/role-based-access-control-configure/) by:
Moving 1, 2, and 3 above into a new Resource Group according to http://blog.kloud.com.au/2015/03/24/moving-resources-between-azure-resource-groups/
Creating a new User Group for our company, say "GroupXYZ".
Adding "GroupXYZ" to the Contributor role.
Adding the particular developers' company accounts to "GroupXYZ".
Motivation for taking the role-based approach
From what I understand, giving everyone access as a Co-Administrator would mean that they have full access to everything in the subscription.
Account-based authentication is preferable to certificate-based authentication due to the complexity added by managing the certificates.
What caused me to question my approach was the fact that I could not perform a VIP swap against the Cloud Service using PowerShell; I received an error message stating that a certificate could not be found.
Do such role-based accounts only have access to Azure by means of the Resource Manager cmdlets?
I had to switch PowerShell to Azure Service Manager (ASM) mode before having access to the Move-AzureDeployment cmdlet.
Something else I'm not sure of is whether or not Visual Studio will have access to those resources (in the Resource Group) when using Role-Based Access Control.
When you apply RBAC to Azure as you have, or in general give an account access via RBAC, those accounts can only access Azure via the Azure Resource Manager APIs, whether that's PowerShell, REST, or VS.
VS 2015 can access Azure resources via RBAC when using the 2.7 SDK. VS 2013 will have support for it soon.