I want to use WAL-E to back up my PostgreSQL data to Amazon S3. I am trying to determine whether the data is encrypted in transit from Postgres to S3 using SSL/TLS, but the documentation is not entirely clear on this point. I see WAL-E uses the boto library, and I believe it uses SSL/TLS by default, but can anybody confirm this, or tell me how to configure WAL-E to ensure it does use SSL/TLS?
HTTPS is the default, but you can manually specify the S3 endpoint to force the protocol.
https://github.com/wal-e/wal-e#manually-specifying-the-s3-endpoint
The format is:
protocol+convention://hostname:port
where valid protocols are http and https, and conventions are path, virtualhost, and subdomain.
Example:
Turns off encryption and specifies the us-west-1 endpoint.
WALE_S3_ENDPOINT=http+path://s3-us-west-1.amazonaws.com:80
For radosgw.
WALE_S3_ENDPOINT=http+path://hostname
As seen when using Deis, which uses radosgw.
WALE_S3_ENDPOINT=http+path://deis-store-gateway:8888
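Conversely, to make sure the connection uses SSL/TLS, specify the https protocol explicitly. A hedged example, assuming the same us-west-1 region and the standard HTTPS port:
WALE_S3_ENDPOINT=https+path://s3-us-west-1.amazonaws.com:443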
This question is about infosec and data privacy, specifically HIPAA compliance on GCP.
Are there any advantages to self-managing a Postgres server (built on GCP Compute Engine instances using, let's say, Terraform) versus using the managed offering, i.e. Cloud SQL?
Thanks in advance
Google Cloud SQL Postgres is a fully managed option for deploying PostgreSQL to Google Cloud. The fully managed option is convenient, but is mainly suitable for cloud-native applications, or applications rebuilt for the cloud.
It has built-in encryption for database tables, temporary files, backups, and any data transferred over Google's internal networks, as well as secure connections via SSL/TLS or the Cloud SQL Proxy.
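For instance, a minimal sketch of connecting over SSL/TLS from Python with psycopg2, assuming you have downloaded the instance's server-ca.pem and a client certificate/key pair from the Cloud SQL console (the host, database, user, and file names below are placeholders):

# Sketch: connect to a Cloud SQL Postgres instance over TLS with certificate
# verification. Values are placeholders; adjust to your instance and files.
import psycopg2

conn = psycopg2.connect(
    host="203.0.113.10",          # public IP of the Cloud SQL instance (placeholder)
    port=5432,
    dbname="mydb",
    user="myuser",
    password="mypassword",
    sslmode="verify-ca",          # require TLS and verify the server certificate
    sslrootcert="server-ca.pem",  # CA certificate downloaded from Cloud SQL
    sslcert="client-cert.pem",    # client certificate
    sslkey="client-key.pem",      # client private key
)
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone())
conn.close()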
Update1
As you are referring to HIPAA, you can check this guide for HIPAA compliance on Google Cloud. Cloud SQL encrypts data at rest using the 256-bit Advanced Encryption Standard (AES-256), or better, with symmetric keys: that is, the same key is used to encrypt the data when it is stored and to decrypt it when it is used. You can also use your own encryption keys with CMEK (customer-managed encryption keys) for Cloud SQL.
You also mentioned infosec. I have not completely understood the term, but I assume you are referring to protecting information from vulnerabilities. You can use Cloud Armor, which is a network security service that provides defenses against DDoS and application attacks like cross-site scripting (XSS) and SQL injection (SQLi).
Self-hosted Postgres gives you full control over your PostgreSQL database on GCP, letting you fine-tune server parameters, modify the database configuration, and tune performance, just like in a local deployment.
Update2
As per this thread, it seems like PostgreSQL is not HIPAA compliant.
For encryption at rest on PostgreSQL you can use PostgreSQL TDE and pgcrypto, as discussed in this similar thread (a sketch follows these points).
For self-hosted Postgres you can also use Shielded VMs, which protect enterprise workloads from threats like remote attacks, privilege escalation, and malicious insiders.
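To illustrate the pgcrypto option, a hypothetical sketch using psycopg2; the table, column, and key are made up, and in practice the key should come from a secret manager rather than the source:

# Sketch: column-level encryption with pgcrypto from Python (placeholders throughout).
import psycopg2

conn = psycopg2.connect("dbname=mydb user=myuser password=mypassword host=localhost")
key = "a-strong-symmetric-key"  # placeholder; fetch from a secret manager in practice

with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS pgcrypto")
    cur.execute("CREATE TABLE IF NOT EXISTS patients (name text, ssn bytea)")
    # Encrypt on write
    cur.execute(
        "INSERT INTO patients (name, ssn) VALUES (%s, pgp_sym_encrypt(%s, %s))",
        ("Alice", "123-45-6789", key),
    )
    # Decrypt on read
    cur.execute("SELECT name, pgp_sym_decrypt(ssn, %s) FROM patients", (key,))
    print(cur.fetchall())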
I am not sure about your application requirements, but based on my understanding of both Cloud SQL and self-hosted Postgres, I would recommend Cloud SQL as the better option, as it is fully managed by Google and also supports HIPAA compliance and encryption.
For more information about the pros and cons of Google Cloud SQL Postgres and self-hosted Postgres, check this document.
I have an enterprise-level Kafka cluster hosted in AWS. I'm trying to consume a topic from that cluster, and I need to use the SSL protocol to connect to the servers.
From the documentation I found that I need to set a few properties:
ssl.keystore.password=test1234
ssl.key.password=test1234
ssl.truststore.location=/var/private/ssl/kafka.server.truststore.jks
ssl.truststore.password=test1234
I have a problem here: I cannot store keystore.jks and truststore.jks in source control. Our security policy does not allow storing sensitive data in the source.
Instead, I have an encrypted keystore file, which I pull from Vault.
Is there a way I can use this encrypted keystore? I don't see such an option in the documentation.
A local file path is needed for the SSL certificates. You'll need a wrapper script that runs before your code starts (or before the main-method consumer logic), downloads the necessary material from Vault, and fills in the properties files/configs (sketched below).
Or use a library that handles this, such as Spring Vault
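For example, a hedged sketch of such a wrapper in Python using the hvac client. It assumes the keystore/truststore are stored base64-encoded, together with their passwords, under a made-up KV v2 path; the paths, field names, and Vault layout are all assumptions:

# Hypothetical wrapper run before the consumer starts: fetch JKS material from
# Vault, write it to local files, and render the SSL properties for the client.
import base64, os
import hvac

client = hvac.Client(url=os.environ["VAULT_ADDR"], token=os.environ["VAULT_TOKEN"])

# Assumed KV v2 layout: kafka/ssl holds base64-encoded JKS blobs and passwords.
secret = client.secrets.kv.v2.read_secret_version(path="kafka/ssl")["data"]["data"]

def write_file(path, b64_content):
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64_content))
    os.chmod(path, 0o600)  # keep the key material readable only by this user

write_file("/tmp/kafka.client.keystore.jks", secret["keystore_b64"])
write_file("/tmp/kafka.client.truststore.jks", secret["truststore_b64"])

with open("/tmp/client-ssl.properties", "w") as f:
    f.write(
        "security.protocol=SSL\n"
        "ssl.keystore.location=/tmp/kafka.client.keystore.jks\n"
        f"ssl.keystore.password={secret['keystore_password']}\n"
        f"ssl.key.password={secret['key_password']}\n"
        "ssl.truststore.location=/tmp/kafka.client.truststore.jks\n"
        f"ssl.truststore.password={secret['truststore_password']}\n"
    )

# The Java consumer can then be started and pointed at /tmp/client-ssl.properties.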
I'm reading the docs about how to use Google Cloud, particularly how to store data in a bucket.
I can see the gcloud compute scp command to upload a file to a VM in a secure way (highlighted in the docs).
To upload to a bucket, the docs say to use gsutil cp.
Is this command secure? If I want to upload sensitive data, do I have to take more precautions (and if so, how)?
As per the documentation:
By default, gsutil accesses Cloud Storage through JSON API request endpoints. You can change this default to the XML API.
The JSON API request endpoint is HTTPS - so assuming the security provided by HTTPS is sufficient for your needs, it should be fine. That won't guard against attacks if your local machine has been compromised with a bogus version of gsutil, but at that point all bets are probably off.
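If you upload from code rather than from the command line, the Cloud Storage client library also talks to the HTTPS JSON API. A minimal sketch, assuming the google-cloud-storage Python package and application default credentials are set up (the bucket and file names are placeholders):

# Sketch: upload a file to a Cloud Storage bucket over HTTPS with the Python client.
from google.cloud import storage

client = storage.Client()                      # uses application default credentials
bucket = client.bucket("my-sensitive-bucket")  # placeholder bucket name
blob = bucket.blob("backups/data.csv")         # destination object name
blob.upload_from_filename("data.csv")          # transferred over HTTPS (JSON API)
print(f"Uploaded to gs://{bucket.name}/{blob.name}")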
Is it possible to use AWS KMS and a tool like credstash without the use of EC2 or equivalent, or does it rely solely on IAM roles?
I've got a server elsewhere where I am testing some things out, and ultimately I will be looking at migrating an app to EC2 etc. to make use of scaling. But for now, while I'm setting up my deployment pipeline, I wondered whether it is still possible to make use of KMS on my non-AWS-provisioned server.
The only possible way I can think of is by installing the AWS CLI tools on the server in question. Does this sound like the right approach?
What @Viccari said in the comments is correct. In terms of what you want to do (store passwords), the AWS Parameter Store would be a good fit for you. See https://docs.aws.amazon.com/systems-manager/latest/userguide/systems-manager-paramstore.html for more information. The guide explicitly calls out your use case:
Parameter Store offers the following benefits and features.
Use a secure, scalable, hosted secrets management service (No servers to manage).
In the end, if you use Parameter Store or KMS, you will need some sort of credentials stored somewhere to grab an AWS STS token with which to call the underlying AWS services. If working outside of AWS EC2, you will need the AWS access key and secret key of an IAM user. If you are in EC2, the IAM instance role will magically provide the credentials, and that role is used to call those AWS services. The AWS SDK does this for you behind the scenes.
But, as you state, you don't want to run this in EC2 (to save money, or for other reasons). The quickest way to store these credentials is to have them in an untracked file (added to your .gitignore) that you can source as environment variables, which your program will then read. This allows you to do local testing and to easily run it in EC2 with zero code changes. See https://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for which variables to set. Note that this doc talks about the CLI; the SDKs follow the same behavior.
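For example, a hedged sketch with boto3, assuming AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION are exported as environment variables on your non-AWS server (the parameter name is made up):

# Sketch: store and read a secret in SSM Parameter Store from a non-EC2 server.
# boto3 picks up credentials from the environment variables automatically.
import boto3

ssm = boto3.client("ssm")

# Write an encrypted (KMS-backed) parameter.
ssm.put_parameter(
    Name="/myapp/prod/db_password",   # placeholder parameter name
    Value="s3cr3t-value",
    Type="SecureString",              # encrypted with the default KMS key
    Overwrite=True,
)

# Read it back, asking SSM to decrypt it.
response = ssm.get_parameter(Name="/myapp/prod/db_password", WithDecryption=True)
print(response["Parameter"]["Value"])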
Spring Boot - 2.0.0.M3
Spring Cloud - Finchley.M1
I want to know if anyone is using Spring Cloud Config Server with both Vault and Git support in a production setup, using a database storage backend.
I have evaluated Spring Cloud Config using Vault and am contemplating whether to go for Oracle JCE to encrypt usernames/passwords, or for Vault, and I am seeking suggestions on the same. We are working with Spring Boot microservices.
Following are my findings:
Vault will introduce an additional layer, and thus additional security and auditing concerns around communicating with Vault.
The Spring Cloud Config actuator endpoints for generating encrypted values are broken in the milestone release at this point, and /encrypt and /decrypt may not work if we go for Oracle JCE support, so we generate encrypted values with stable versions.
We do not wish to use a Consul server and are trying to use Cassandra as the storage backend.
I used the Vault AppRole authentication backend and generated a token (different from the root token, as it is unsafe to use that) with read permissions. However, Spring Cloud Config at the moment supports only token-based authentication from the client side. That means we first generate a token from Vault and then pass it as a command-line/environment variable.
Some additional points of concern are token expiry (though we can have a non-expiring token, I am not sure about the pros/cons), restarts, safety issues, and instantiating new microservices. There is no provision for dynamic tokens/authentication on the Cloud Config side.
For the milestone release I found that client-side encryption/decryption is not working as of now, even with the recommended inclusion of the RSA JAR. Here is the ticket I opened:
https://github.com/spring-cloud/spring-cloud-config/issues/805#issuecomment-332491536
These are some of my observations; please share your thoughts if there is any case study/whitepaper that addresses Spring Cloud Config + Vault use cases, setup, and challenges for a production microservices environment.
Thanks
Thanks for reaching out to me. One thing I would state is that the AppRole backend utilizes two distinct tokens, and spring-cloud-config-vault does indeed support this functionality; see http://cloud.spring.io/spring-cloud-vault/single/spring-cloud-vault.html#_approle_authentication. I leverage Vault in the same way I leverage Config Server, as per the documentation. I don't encrypt any values in my config; I just don't put them there. I put the secret values in Vault and let it serve config. As long as keys don't collide, you don't have to mess with anything; otherwise you may need to adjust the priority so Vault wins, again see the documentation that I pointed to above. I wouldn't mess with encryption/decryption in spring-cloud-config personally. Because you have to check the keys into SCM or distribute them to your teams for local development, you lose the value of having these keys, IMO.
Thanks. Spring Cloud Vault does support it, but not Spring Cloud Config with Vault. The only way seems to be passing the X-Config-Token header from the microservice to the Config Server. We are a bit skeptical about this part of generating tokens manually or through a script, especially with containerization and when new microservice instances are spawned. Not sure about this approach, especially in a production setup.
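To make that token generation less manual, the AppRole login can be scripted at container start and the resulting token handed to the client (for example as the value used for X-Config-Token / spring.cloud.config.token). A hedged sketch with the Python hvac client; how the role_id/secret_id are delivered, and the environment variable names, are assumptions:

# Hypothetical startup step: exchange an AppRole role_id/secret_id for a
# short-lived Vault token before the microservice starts, instead of baking a
# long-lived token into the configuration.
import os
import hvac

client = hvac.Client(url=os.environ["VAULT_ADDR"])

# role_id/secret_id injected by the orchestrator (placeholder variable names).
login = client.auth.approle.login(
    role_id=os.environ["VAULT_ROLE_ID"],
    secret_id=os.environ["VAULT_SECRET_ID"],
)
token = login["auth"]["client_token"]

# Hand the token to the Spring client, e.g. via an environment variable that the
# startup command maps to spring.cloud.config.token (sent as X-Config-Token).
print(f"export SPRING_CLOUD_CONFIG_TOKEN={token}")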