gsutil compose error on bucket encrypted with customer managed keys - google-cloud-storage

I created a bucket encrypted with customer managed keys.
I'm able to copy, move and cat files but when I try to execute:
gsutil compose gs://bucket/first_file.csv gs://bucket/second_file.csv gs://bucket/final_file.csv
I get the following error:
BadRequestException: 400 Component object (bucket/first_file.csv) is encrypted with a Cloud KMS key, which is not supported.
I tried with service login and user with different rights but error is always the same.
The documentation (https://cloud.google.com/storage/docs/gsutil/commands/compose) mentions only the number of source objects as a limitation.
What am I missing? Is there a limitation on KMS keys compatible with gsutil compose?

As the error suggests, and as documented here, the compose operation currently does not support objects that are encrypted with a customer-managed key. The limitations listed in the documentation include that source objects must:
"NOT use customer-managed encryption keys."
So, since your component objects are encrypted with a Cloud KMS key, gsutil compose returns that error message.
I'd recommend using Google-managed keys for objects you intend to compose, to avoid this issue.
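If you do need to compose these objects, a minimal workaround sketch (not from the original answer; it relies on gsutil rewrite's documented fallback to the bucket's default encryption, and the PROJECT/LOC/RING/KEY placeholders are hypothetical) is to rewrite the sources to Google-managed encryption, compose, and optionally re-encrypt the result:

# Clear the bucket's default KMS key so rewrites fall back to
# Google-managed encryption.
gsutil kms encryption -d gs://bucket

# Rewrite the source objects with the bucket's current default
# encryption (now Google-managed).
gsutil rewrite -k gs://bucket/first_file.csv gs://bucket/second_file.csv

# Compose succeeds once no component object uses a Cloud KMS key.
gsutil compose gs://bucket/first_file.csv gs://bucket/second_file.csv gs://bucket/final_file.csv

# Optionally re-encrypt the composite object with the CMEK afterwards
# (placeholder key path).
gsutil -o "GSUtil:encryption_key=projects/PROJECT/locations/LOC/keyRings/RING/cryptoKeys/KEY" rewrite -k gs://bucket/final_file.csv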

Related

Azure DevOps SQL DacpacTask failing for Azure Key Vault

I'm trying to deploy a dacpac to an Azure SQL Database with Always Encrypted enabled. The DevOps agent is running on a self-hosted VM with sqlpackage.exe version 19 (build 16.0.5400.1) installed on it.
I was able to track the issue down by adding /diagnostics as an argument to the task; the exception that is raised is:
Unexpected exception executing KeyVault extension 'Object reference not set to an instance of an object.' at Microsoft.SqlServer.Dac.KeyVault.DacKeyVaultAuthenticator.Validate(IList`1 keyVaultUrls, CancellationToken cancelToken)
Does anybody have a suggestion on how to solve this?
Please check the points below:
1. Microsoft.SqlServer.Dac.KeyVault.DacKeyVaultService provides a service for discovering and configuring a Microsoft.SqlServer.Dac.KeyVault.KeyVaultAuthenticator to handle key vault access requests. These requests occur during deployment if an encrypted table is being altered. It also supports initialization of general key vault support in an application.
2. If you store your column master keys in a key vault and you are using access policies for authorization (a CLI sketch for granting these permissions follows the references below):
   - Your application's identity needs the following access policy permissions on the key vault: get, unwrapKey, and verify.
   - A user managing keys for Always Encrypted needs the following access policy permissions on the key vault: create, get, list, sign, unwrapKey, wrapKey, verify.
   See Create & store column master keys for Always Encrypted - SQL Server | Microsoft Docs.
3. To publish a DAC package when Always Encrypted is set up in the DACPAC and/or in the target database, you might need some or all of the permissions below, depending on the differences between the schema in the DACPAC and the target database schema: ALTER ANY COLUMN MASTER KEY, ALTER ANY COLUMN ENCRYPTION KEY, VIEW ANY COLUMN MASTER KEY DEFINITION, VIEW ANY COLUMN ENCRYPTION KEY DEFINITION.
4. If the agent VM runs in Azure, you may also need to enable the "Azure Virtual Machines for deployment" checkbox in the key vault's access configuration.
References:
- Configure column encryption using Always Encrypted with a DAC package - SQL Server | Microsoft Docs
- azure-sql-advanced-deployment-part4
- KeyVaultAuthenticator.Validate(IList, CancellationToken) - Microsoft.SqlServer.Dac.KeyVault Namespace | Microsoft Docs
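A minimal sketch of granting the point-2 permissions with the az CLI (the vault name, application client ID, and user are hypothetical placeholders, not from the original answer):

# Application identity: needs get, unwrapKey, and verify on keys.
az keyvault set-policy --name my-vault \
  --spn <app-client-id> \
  --key-permissions get unwrapKey verify

# Key administrator: needs the full set listed above.
az keyvault set-policy --name my-vault \
  --upn admin@example.com \
  --key-permissions create get list sign unwrapKey wrapKey verify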
I managed to find a solution: I downgraded sqlpackage.exe. If I understand it correctly, version 19 targets SQL Server compatibility level 160, which ships with SQL Server 2022. Version 18 works with compatibility level 150, which my Azure SQL database is currently set to.

Gcloud kms error: crypto_key_version.state: DESTROYED, but ENABLED is required

I'm trying to encrypt a secret with Google KMS like this:
gcloud kms encrypt --ciphertext-file=encrypted_secret --plaintext-file=secret --key=very_secret_key --keyring=very_secret_ring --location=very_secret_location
and get the following error:
ERROR: (gcloud.kms.encrypt) FAILED_PRECONDITION: The request cannot be fulfilled. Resource projects/amazing_project/locations/very_secret_location/keyRings/very_secret_keyring/cryptoKeys/very_secret_key/cryptoKeyVersions/1 has crypto_key_version.state: DESTROYED, but ENABLED is required.
Any input is much appreciated, since I can't find anything related to this issue in the GCP docs.
This error means the key material for that key version has been destroyed and is no longer stored. A destroyed version cannot be re-enabled, so either use another key or create a new primary version of this one.
More info here:
https://cloud.google.com/kms/docs/reference/rpc/google.cloud.kms.v1#google.cloud.kms.v1.CryptoKeyVersion.CryptoKeyVersionState
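A quick sketch of checking the versions and rotating to a usable one (reusing the question's placeholder names):

# List the key's versions and their states.
gcloud kms keys versions list --key=very_secret_key \
  --keyring=very_secret_ring --location=very_secret_location

# Create a new version and make it the primary one used by encrypt.
gcloud kms keys versions create --key=very_secret_key \
  --keyring=very_secret_ring --location=very_secret_location --primary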

Verify MongoDB encryption based on Local Key Management

I have configured MongoDB 3.4.16 Enterprise version for native encryption following the Local Key Management method as mentioned in the documentation of MongoDB.
As mentioned in the tutorial, I get the success message on the console after the operation completes:
[initandlisten] Encryption key manager initialized with key file:
My question is: how can I demonstrate to other people that, with just these configuration changes, the encryption has happened? For example, it would help if I could show the DB data file before and after applying the encryption configuration.
I don't have an answer, rather a comment. Be sure to take note of the notice at the top of the Local Key Management page.
IMPORTANT: Using the keyfile method does not meet most regulatory key management guidelines and requires users to securely manage their own keys.
The safe management of the keyfile is critical.
Without a dedicated key manager to store and manage keys, it is like leaving the keys to your house under your welcome mat. Since you are on Enterprise edition, use KMIP and deploy an encryption key manager. More on encryption key management for MongoDB here: https://info.townsendsecurity.com/mongodb-encryption-key-management-definitive-guide
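If you still want to demonstrate the effect of the configuration itself, here is a minimal sketch (not from the thread; it assumes a Linux deployment with the default dbPath, and disables WiredTiger block compression so snappy compression doesn't obscure the comparison):

# Write a recognizable marker into an uncompressed collection, then
# flush to disk.
mongo --eval 'db.createCollection("demo", {storageEngine: {wiredTiger: {configString: "block_compressor="}}});
  db.demo.insert({secret: "PLAINTEXT_MARKER_12345"});
  db.adminCommand({fsync: 1})'

# Search the data files for the marker (the path is an assumption).
grep -ra "PLAINTEXT_MARKER_12345" /var/lib/mongodb/ \
  && echo "marker visible: data files are NOT encrypted" \
  || echo "marker not found: consistent with encryption at rest"

Run this once against an unencrypted deployment and once against the encrypted one to show the difference.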

Google Cloud Storage 500 Internal Server Error 'Google::Cloud::Storage::SignedUrlUnavailable'

Trying to get Google Cloud Storage working in my app. I successfully saved an image to a bucket, but when trying to retrieve the image, I receive this error:
GCS Storage (615.3ms) Generated URL for file at key: 9A95rZATRKNpGbMNDbu7RqJx ()
Completed 500 Internal Server Error in 618ms (ActiveRecord: 0.2ms)
Google::Cloud::Storage::SignedUrlUnavailable (Google::Cloud::Storage::SignedUrlUnavailable):
Any idea of what's going on? I can't find an explanation for this error in their documentation.
To provide some explanation here...
Google App Engine (as well as Google Compute Engine, Kubernetes Engine, and Cloud Run) provides "ambient" credentials associated with the VM or instance being run, but only in the form of OAuth tokens. For most API calls, this is sufficient and convenient.
However, there are a small number of exceptions, and Google Cloud Storage is one of them. Recent Storage clients (including the google-cloud-storage gem) may require a full service account key to support certain calls that involve signed URLs. This full key is not provided automatically by App Engine (or other hosting environments); you need to provide one yourself.
So as a previous answer indicated, if you're using Cloud Storage, you may not be able to depend on the "ambient" credentials. Instead, you should create a service account, download a service account key, and make it available to your app (for example, via the ActiveStorage configs, or by setting the GOOGLE_APPLICATION_CREDENTIALS environment variable).
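A quick sketch of that last step with the gcloud CLI (the service account and paths below are hypothetical placeholders):

# Create a key for an existing service account and point the app at it.
gcloud iam service-accounts keys create keyfile.json \
  --iam-account=my-app@my-project.iam.gserviceaccount.com
export GOOGLE_APPLICATION_CREDENTIALS=/path/to/keyfile.json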
I was able to figure this out. I had been following Rail's guide on Active Storage with Google Storage Cloud, and was unclear on how to generate my credentials file.
google:
  service: GCS
  credentials: <%= Rails.root.join("path/to/keyfile.json") %>
  project: ""
  bucket: ""
Initially, I thought I didn't need a keyfile due to this sentence in Google's Cloud Storage authentication documentation:
"If you're running your application on Google App Engine or Google Compute Engine, the environment already provides a service account's authentication information, so no further setup is required."
(I am using Google App Engine)
So I commented out the credentials line and started testing. Strangely, I was able to write to Google Cloud Storage without issue. However, when retrieving the image I would receive the 500 server error Google::Cloud::Storage::SignedUrlUnavailable.
I fixed this by generating my private key and adding it to my rails app.
Another possible solution, as of google-cloud-storage gem version 1.27 (August 2020), is documented here. In my case Google::Auth.get_application_default, as shown in the documentation, returned an empty object, but using Google::Cloud::Storage::Credentials.default.client instead worked.
If you get a Google::Apis::ClientError: badRequest: Request contains an invalid argument response when signing, check that you have a dash for the project in the signing URL (i.e., projects/-/serviceAccounts; an explicit project name in the path is deprecated and no longer valid), and that the "issuer" string is correct: the full email address of the service account, not just the service account name.
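For reference, a hedged curl sketch of the underlying IAM Credentials signBlob call, showing the dash in place of the project ID (the service account email is a placeholder):

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  -d "{\"payload\": \"$(echo -n hello | base64)\"}" \
  "https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/my-app@my-project.iam.gserviceaccount.com:signBlob"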
If you get Google::Apis::ClientError: forbidden: The caller does not have permission, verify the roles your service account has:
gcloud projects get-iam-policy <project-name> \
  --filter="bindings.members:<sa_name>" \
  --flatten="bindings[].members" \
  --format='table(bindings.role)'
=> ROLE
roles/iam.serviceAccountTokenCreator
roles/storage.admin
serviceAccountTokenCreator is required to call the signBlob service, and you need storage.admin to have ownership of the object you need to sign. I think these are project-wide rights; unfortunately I couldn't get it to work with more fine-grained permissions (i.e., one app being admin only for a specific Storage bucket).
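A sketch of granting those two roles with gcloud (the project and service account names are placeholders):

gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountTokenCreator"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.admin"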

Google Speech API returns 403 PERMISSION_DENIED

I have been using the Google Speech API to transcribe audio to text from my PHP app (using the Google Cloud PHP Client) for several months without any problem. But my calls have now started to return 403 errors with status "PERMISSION_DENIED" and message "The caller does not have permission".
I'm using the Speech API together with Google Storage. I'm authenticating using a service account and sending my audio data to Storage. That's working: the file gets uploaded. So I understand (but I might be wrong) that "the caller" does not have permission to then read the audio data from Storage.
I've been playing with permissions through the Google Console without success. I've read the docs but am quite confused. The service account I am using (I guess this is "the caller"?) has owner permissions on the project. And everything used to work fine, I haven't changed a thing.
I'm not posting code because, if I understand correctly, my app code isn't the issue; it's rather my Google Cloud settings. I'd be grateful for any ideas or clarifications of concepts!
Thanks.
Being an owner of the project doesn't necessarily imply that the service account has read permission on the object. It's possible that the object was uploaded by another account that specified a private ACL or similar.
Make sure that the service account has access to the object by giving it the right permissions on the entire bucket or on the specific object itself.
You can do so using gsutil acl. More information and additional methods may be found in the official documentation.
For instance, the following command gives READ permission on a single object to your service account:
gsutil acl ch -u serviceAccount@domain.com:R gs://bucket/object
And this command gives READ permission on every object in a bucket to your service account:
gsutil acl ch -r -u serviceAccount@domain.com:R gs://bucket
In Google Cloud Vision, when you're creating credentials with a service account key, you have to assign the account a role; set it to Owner so it has full permissions.