Different behaviour in GKE between etcd and PV storage for same key action - kubernetes

GKE natively works with a customer-supplied key (using KMS) for etcd / content in the control plane, including actions like key rotation and key disabling/enabling.
While a customer-supplied key (using KMS) also works for encrypting dynamically provisioned PVs (via a storage class),
it doesn't support actions like key rotation or key disabling/enabling.
For example, disabling the key has no effect on an already mounted PV.
Why this difference? Are these two implementations drastically different?

According to the documentation on customer-supplied encryption keys:
"If an object is encrypted using a customer-supplied encryption key, you can rotate the object's key by rewriting the object. Rewrites are supported through the JSON API, but not the XML API. See Rotating an encryption key for examples of key rotation."
You can also refer to the Stack Overflow link.
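The quoted passage is about Cloud Storage objects rather than GKE volumes, but it shows the rotate-by-rewrite model. Below is a minimal sketch of such a rewrite, assuming the google-cloud-storage Java client; the bucket name, object name, and base64-encoded AES-256 keys are placeholders.

import com.google.cloud.storage.Blob;
import com.google.cloud.storage.BlobId;
import com.google.cloud.storage.Storage;
import com.google.cloud.storage.StorageOptions;

Storage storage = StorageOptions.getDefaultInstance().getService();
BlobId blobId = BlobId.of("my-bucket", "my-object");    // placeholder bucket/object

String oldKey = "BASE64_OLD_AES256_KEY";                // key the object is currently encrypted with
String newKey = "BASE64_NEW_AES256_KEY";                // replacement key

// "Rotating" a customer-supplied key means rewriting the object in place:
// decrypt with the old key on read, re-encrypt with the new key on write.
Storage.CopyRequest request = Storage.CopyRequest.newBuilder()
        .setSource(blobId)
        .setSourceOptions(Storage.BlobSourceOption.decryptionKey(oldKey))
        .setTarget(blobId, Storage.BlobTargetOption.encryptionKey(newKey))
        .build();
Blob rewritten = storage.copy(request).getResult();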

Related

Are KMS data key pairs secure?

So, I'm building an application for mTLS authentication: I generate X.509 certificates using AWS ACM PCA and bundle them together with a private key in PKCS#12 format.
At the moment I generate key pairs programmatically in Java, and they are never stored.
But since I'm not a security expert, I thought maybe it's better to use AWS KMS for creating key pairs.
So, it seems like what I need is a CMK which can generate data key pairs that are stored in KMS.
If they're stored in KMS and I can fetch the private key at any time, how is that more secure than not storing it at all?
Or is the purpose of KMS only to store keys securely?
If you have a use for the encrypted private key that kms.generateDataKeyPair will provide, then it would be of use. It would also be a nice way to ensure that your keys are being generated securely (secure randomness, etc).
It's important to note that KMS will not store the generated key pair. The idea is that you store the plaintext public key and the encrypted private key, and call kms.decrypt to turn the encrypted private key back into plaintext whenever you need it.
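As a rough sketch (not a drop-in implementation), assuming the AWS SDK for Java v1 used by the snippets later in this thread and a placeholder CMK alias alias/my-cmk, the flow looks like this: KMS hands back the public key and an encrypted private key, you persist both, and you call kms.decrypt only when you need the plaintext private key.

import com.amazonaws.services.kms.AWSKMS;
import com.amazonaws.services.kms.AWSKMSClientBuilder;
import com.amazonaws.services.kms.model.DataKeyPairSpec;
import com.amazonaws.services.kms.model.DecryptRequest;
import com.amazonaws.services.kms.model.GenerateDataKeyPairRequest;
import com.amazonaws.services.kms.model.GenerateDataKeyPairResult;
import java.nio.ByteBuffer;

AWSKMS kms = AWSKMSClientBuilder.defaultClient();

// Generate an RSA key pair under the CMK. KMS returns the public key and the
// private key encrypted under the CMK; KMS itself does not keep the pair.
GenerateDataKeyPairResult keyPair = kms.generateDataKeyPair(
        new GenerateDataKeyPairRequest()
                .withKeyId("alias/my-cmk")                 // placeholder CMK alias
                .withKeyPairSpec(DataKeyPairSpec.RSA_2048));

ByteBuffer publicKey = keyPair.getPublicKey();                          // store as-is
ByteBuffer encryptedPrivateKey = keyPair.getPrivateKeyCiphertextBlob(); // store as-is

// Later, e.g. when building the PKCS#12 bundle, recover the plaintext private key.
// Only callers authorised to use the CMK can make this call succeed.
ByteBuffer privateKey = kms.decrypt(
        new DecryptRequest()
                .withKeyId("alias/my-cmk")
                .withCiphertextBlob(encryptedPrivateKey))
        .getPlaintext();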

Is exposing Pulumi encryptedKey safe?

I have a stack that utilizes an AWS KMS key for, I believe, secrets and state encryption in the Pulumi stack configuration file Pulumi.<stack-name>.yaml.
Is it safe to expose this in a public repository? As I understand it, secrets are stored within stack configuration files in encrypted form as well; would it be reasonably safe to expose those in a public repository too?
How exactly is this key generated, and what are the inner mechanics behind secrets management in Pulumi?
Yes, exposing these values in your code is completely safe.
The key is encrypted using your key provider, in this case an AWS KMS key. It's only possible to retrieve the plaintext value if someone has access to the AWS KMS key itself to decrypt it, and even then it's a bit of a hoop-jumping exercise.
I expose these values myself in source control, so you should be absolutely fine leaving them in your repo.

CMK usage in ADF pipelines

I have created a CMK in ADF. I don't know how to consume the CMK in ADF, either in pipelines or in data flows.
When I tried to add a pipeline, I didn't get an option to select the CMK. Any information is helpful.
Let's review the entire process.
First, we can use Azure Key Vault to generate an RSA 2048 key (see the sketch after this answer).
Then add the key to the Azure Data Factory.
By default, data is encrypted with a randomly generated Microsoft-managed key. Once we register the RSA 2048 key from Key Vault, the data will be encrypted with the customer-managed key (CMK) instead.
According to this document, customer-managed keys (CMK) are a type of server-side encryption. They give us control over the keys, including Bring Your Own Key (BYOK) support, and allow you to generate new ones.
This is similar to Transparent Data Encryption with customer-managed keys in Azure SQL Database.
So we can conclude that customer-managed key (CMK) encryption is transparent to users: it encrypts the data at rest in the service's storage, not anything in the pipeline or data flow definitions. In other words, there is no difference for us when authoring pipelines or data flows.
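For the first step, here is a rough sketch of generating the RSA 2048 key with the Azure Key Vault SDK for Java (Java is used to match the other snippets in this thread); the vault URL and key name are placeholders. The returned key identifier is what you register in the factory's customer-managed key settings, which also explains why there is no per-pipeline option to select the CMK: it is configured once on the factory, not on individual pipelines.

import com.azure.identity.DefaultAzureCredentialBuilder;
import com.azure.security.keyvault.keys.KeyClient;
import com.azure.security.keyvault.keys.KeyClientBuilder;
import com.azure.security.keyvault.keys.models.CreateRsaKeyOptions;
import com.azure.security.keyvault.keys.models.KeyVaultKey;

// Placeholder vault URL; the signed-in identity needs key-management permissions on the vault.
KeyClient keyClient = new KeyClientBuilder()
        .vaultUrl("https://<your-vault-name>.vault.azure.net")
        .credential(new DefaultAzureCredentialBuilder().build())
        .buildClient();

// 2048-bit RSA key to be used as the factory's customer-managed key.
KeyVaultKey cmk = keyClient.createRsaKey(new CreateRsaKeyOptions("adf-cmk").setKeySize(2048));
System.out.println("Key identifier to register in ADF: " + cmk.getId());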

AWS KMS storing customer master key

I know I'm missing something here, but I'm struggling to understand the customer master key concept in AWS KMS. Below is the sample code.
Code to create master key:
`CreateKeyRequest req = new CreateKeyRequest();
CreateKeyResult result = kmsClient.createKey(req);
String customerMasterKey = result.getKeyMetadata().getKeyId();`
Code to create data key using customer master key:
`GenerateDataKeyRequest dataKeyRequest = new GenerateDataKeyRequest();
dataKeyRequest.setKeyId(customerMasterKey);
dataKeyRequest.setKeySpec("AES_128");
GenerateDataKeyResult dataKeyResult = kmsClient.generateDataKey(dataKeyRequest);`
Now, as per my understanding, I need to use the master key to decrypt the encrypted data key every time I want to encrypt/decrypt something. Which means I need to store these two keys in some location. So if someone else can get access to these two keys, can they decrypt my data using the AWS Encryption SDK?
The master key never leaves AWS and is only accessible by someone with the appropriate access to your account and the key. If they have access to your account with the appropriate rights to use the key, then they can use the master key to encrypt/decrypt your data key. Remember, the master key ID is not the actual key; therefore, being in possession of the key ID is not useful outside of AWS.
You do not store both keys; the master key ID will always be viewable using the console, CLI, or SDK (I assume, since I have not used it).
The data key is not managed by the KMS service; therefore, you'll have to store it (after encrypting it with the master key) along with the encrypted data.
The answer to your question is... if it happens that an unauthorised individual has a copy of your master key ID and your encrypted data key, there's no way they can use that master key unless they also have access to your AWS user credentials with the appropriate rights to use that master key.
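To make that concrete, here is a hedged continuation of the question's snippet (same AWS SDK for Java v1, reusing its kmsClient and dataKeyResult variables): only the encrypted data key is persisted, and KMS is asked to decrypt it when needed.

import com.amazonaws.services.kms.model.DecryptRequest;
import java.nio.ByteBuffer;

// From the generateDataKey call in the question:
ByteBuffer plaintextDataKey = dataKeyResult.getPlaintext();       // use for local AES encryption, then discard
ByteBuffer encryptedDataKey = dataKeyResult.getCiphertextBlob();  // safe to persist next to the ciphertext

// Later: recover the plaintext data key. This call only succeeds for AWS
// credentials that are authorised to use the master key, so holding the key ID
// and the encrypted data key alone is not enough to decrypt anything.
ByteBuffer recoveredDataKey = kmsClient.decrypt(
        new DecryptRequest().withCiphertextBlob(encryptedDataKey))
        .getPlaintext();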

Is it possible to copyObject from one Cloud Object Storage instance to another? The buckets are in different regions

I would like to use the Node SDK to implement a backup and restore mechanism between two instances of Cloud Object Storage. I have added a service ID to the instances and added permissions for the service ID to access the buckets in the instance I want to write to. The buckets will be in different regions. I have tried a variety of endpoints, both legacy and non-legacy, private and public, to achieve this, but I usually get Access Denied.
Is what I am trying to do possible with the SDK? If so, can someone point me in the right direction?
var config = {
"apiKeyId": "xxxxxxxxxxxxxxxxxxxxxxx-xxxxxxxxxxxxxxxxxxx",
"endpoint": "s3.eu-gb.objectstorage.softlayer.net",
"iam_apikey_description": "Auto generated apikey during resource-key operation for Instance - crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxx:xxxxxxxxxxx::",
"iam_apikey_name": "auto-generated-apikey-xxxxxxxxxxxxxxxxxxxxxx",
"iam_role_crn": "crn:v1:bluemix:public:iam::::serviceRole:Writer",
"iam_serviceid_crn": "crn:v1:bluemix:public:iam-identity::a/0xxxxxxxxxxxxxxxxxxxx::serviceid:ServiceIdxxxxxxxxxxxxxxxxxxxxxx",
"serviceInstanceId": "crn:v1:bluemix:public:cloud-object-storage:global:a/xxxxxxxxxxxxxxxxxxx:xxxxxxxxxxxxxxxxxxxxxxxxxx::",
"ibmAuthEndpoint": "iam.cloud.ibm.com/oidc/token"
}
This should work as long as you are able to properly grant the requesting user access to read the source of the put-copy, and as long as you are not using Key Protect-based keys.
So the breakdown here is a bit confusing due to some unintuitive terminology.
A service instance is a collection of buckets. The primary reason for having multiple instances of COS is to have more granularity in your billing, as you'll get a separate line item for each instance. The term is a bit misleading, however, because COS is a true multi-tenant system - you aren't actually provisioning an instance of COS, you're provisioning a sort of sub-account within the existing system.
A bucket is used to segment your data into different storage locations or storage classes. Other behavior, like CORS, archiving, or retention, acts at the bucket level as well. You don't want to segment something that you expect to scale (like customer data) across separate buckets, as there's a limit of ~1k buckets in an instance. IBM Cloud IAM treats buckets as 'resources', and they are subject to IAM policies.
Instead, data that doesn't need to be segregated by location or class, and that you expect to be subject to the same CORS, lifecycle, retention, or IAM policies, can be separated by prefix. This means a bunch of similar objects share a path: for example, foo/bar and foo/bas have the same prefix foo/. This helps with listing and organization but doesn't provide granular access control or any other sort of policy-esque functionality.
Now, to your question, the answer is both yes and no. If the buckets are in the same instance, then there is no problem. Bucket names are unique, so as long as there isn't any secondary managed encryption (e.g. Key Protect), there's no problem copying across buckets, even if they span regions. Keep in mind, however, that large objects will take time to copy, and COS's strong consistency might lead to situations where the operation may not return a response until it's completed. Copying across instances is not currently supported.
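The question uses the Node SDK, but to stay with the Java used elsewhere in this thread, here is a rough sketch of the server-side copy described above, written against the S3-style API shape that the IBM COS SDKs mirror (the AWS S3 Java SDK v1 classes are used here as a stand-in, and the endpoint, credential wiring, and bucket names are placeholders/assumptions).

import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CopyObjectRequest;

// Both buckets must be in the same COS service instance; they can be in
// different regions as long as the chosen endpoint can reach the target bucket.
AmazonS3 cos = AmazonS3ClientBuilder.standard()
        .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(
                "s3.eu-gb.cloud-object-storage.appdomain.cloud", "eu-gb")) // placeholder endpoint/region
        .build();

// Server-side copy: the object bytes never pass through the client.
cos.copyObject(new CopyObjectRequest(
        "source-bucket", "backups/object-1",    // placeholder source bucket/key
        "target-bucket", "backups/object-1"));  // placeholder target bucket/key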