From a Grails application I would like to create a blob in a bucket.
I already created a bucket in Google Cloud, created a service account, and gave that service account owner access to the bucket. I then created a service account key, project-id-c4b144.json, which holds all the credentials.
import com.google.auth.oauth2.ServiceAccountCredentials
import com.google.cloud.storage.Blob
import com.google.cloud.storage.BlobId
import com.google.cloud.storage.BlobInfo
import com.google.cloud.storage.Storage
import com.google.cloud.storage.StorageOptions
import java.nio.charset.StandardCharsets

StorageOptions storageOptions = StorageOptions.newBuilder()
        .setCredentials(ServiceAccountCredentials
                .fromStream(new FileInputStream("/home/etibar/Downloads/project-id-c4b144.json"))) // setting credentials
        .setProjectId("project-id") // setting project id, in reality it is different
        .build()
Storage storage = storageOptions.getService()
BlobId blobId = BlobId.of("dispatching-photos", "blob_name")
BlobInfo blobInfo = BlobInfo.newBuilder(blobId).setContentType("text/plain").build()
Blob blob = storage.create(blobInfo, "Hello, Cloud Storage!".getBytes(StandardCharsets.UTF_8))
When I run this code, I get a JSON error message back.
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
"code" : 403,
"errors" : [ {
"domain" : "global",
"message" : "Caller does not have storage.objects.create access to bucket dispatching-photos.",
"reason" : "forbidden"
} ],
"message" : "Caller does not have storage.objects.create access to bucket dispatching-photos."
}
| Grails Version: 3.2.10
| Groovy Version: 2.4.10
| JVM Version: 1.8.0_131
google-cloud-datastore:1.2.1
google-auth-library-oauth2-http:0.7.1
google-cloud-storage:1.2.2
Concerning the service account that JSON file corresponds to -- I'm betting either:
A) the bucket you're trying to access is owned by a different project than the one where you have that account set as a storage admin,
or B) you're setting permissions for a different service account than the one that JSON file corresponds to.
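One quick way to check both possibilities, sketched with the gsutil CLI (the key path is the one from the question; the service-account email is a placeholder): read the client_email and project_id out of the key file, then compare them with the bucket's IAM policy.
grep -E '"client_email"|"project_id"' /home/etibar/Downloads/project-id-c4b144.json
gsutil iam get gs://dispatching-photos
# if that service account is missing from the policy, grant it object-create rights:
gsutil iam ch serviceAccount:YOUR_SA@YOUR_PROJECT.iam.gserviceaccount.com:objectCreator gs://dispatching-photos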
Related
I am trying to ingest data from MySQL to BigQuery. I am using Debezium components running on Docker for this purpose.
Whenever I try to deploy the BigQuery sink connector to Kafka Connect, I get this error:
{"error_code":400,"message":"Connector configuration is invalid and contains the following 2 error(s):\nFailed to construct GCS client: Failed to access JSON key file\nAn unexpected error occurred while validating credentials for BigQuery: Failed to access JSON key file\nYou can also find the above list of errors at the endpoint `/connector-plugins/{connectorType}/config/validate`"}
It appears to be an issue with locating the service account key file.
I granted the service account BigQuery Admin and Editor permissions, but the error persists.
This is my BigQuery connector configuration file:
{
  "name": "kcbq-connect1",
  "config": {
    "connector.class": "com.wepay.kafka.connect.bigquery.BigQuerySinkConnector",
    "tasks.max": "1",
    "topics": "kcbq-quickstart1",
    "sanitizeTopics": "true",
    "autoCreateTables": "true",
    "autoUpdateSchemas": "true",
    "schemaRetriever": "com.wepay.kafka.connect.bigquery.retrieve.IdentitySchemaRetriever",
    "schemaRegistryLocation": "http://localhost:8081",
    "bufferSize": "100000",
    "maxWriteSize": "10000",
    "tableWriteWait": "1000",
    "project": "dummy-production-overview",
    "defaultDataset": "debeziumtest",
    "keyfile": "/Users/Oladayo/Desktop/Debezium-Learning/key.json"
  }
}
Can anyone help?
Thank you.
I needed to mount the service account key from my local directory into the Kafka Connect container. That was how I was able to solve the issue. Thank you :)
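For anyone hitting the same error, the fix is a bind mount plus pointing "keyfile" at the container-side path. A rough sketch with docker run (image tag, container path, and broker address are assumptions; keep whatever other options you already start Kafka Connect with):
docker run -d --name connect -p 8083:8083 \
  -e BOOTSTRAP_SERVERS=kafka:9092 -e GROUP_ID=1 \
  -e CONFIG_STORAGE_TOPIC=connect_configs \
  -e OFFSET_STORAGE_TOPIC=connect_offsets \
  -e STATUS_STORAGE_TOPIC=connect_statuses \
  -v /Users/Oladayo/Desktop/Debezium-Learning/key.json:/opt/connect/key.json \
  debezium/connect:1.9
# then reference the path inside the container in the connector config:
#   "keyfile": "/opt/connect/key.json"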
I just created an Azure Function that should connect to my instance of MongoDB on Atlas, basically following this tutorial:
https://www.mongodb.com/blog/post/how-to-integrate-azure-functions-with-mongodb
From my local development environment with Visual Studio, everything works fine and I can connect to the Atlas environment, but when I deploy the code to Azure, the following exception is raised:
A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : "1", ConnectionMode : "ReplicaSet", Type : "ReplicaSet", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/ltdevcluster-shard-00-00.qkeby.mongodb.net:27017" }", EndPoint: "Unspecified/ltdevcluster-shard-00-00.qkeby.mongodb.net:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
---> MongoDB.Driver.MongoConnectionException: An exception occurred while receiving a message from the server.
---> System.IO.EndOfStreamException: Attempted to read past the end of the stream.
If I instead set the Network Access to "everywhere", again everything works fine.
Now, in the Network Access panel of Atlas, I added the IPs retrieved from the Azure Portal under my function app => Networking => Inbound traffic and Outbound traffic (a total of 1 IP for inbound and 3 IPs for outbound).
But adding those 4 IPs has not solved the issue.
What else should I do?
As a workaround, if you can use a static IP, check this: How do I set a static IP in Functions?
You can also set up a Private Endpoint, and for securing the database credentials, check the secrets engine integration using Vault.
You can also refer to How to connect Azure Function with MongoDB Atlas, Azure functions unable to connect with Mongo Db Atlas M10, and How to connect Azure Function with MongoDB Atlas.
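Also worth checking: a Function App can send traffic from any of its possible outbound IPs, which is usually a longer list than the few shown as currently in use, so every one of them needs to be on the Atlas allow list. A quick way to dump the full list with the Azure CLI (resource group and app name are placeholders):
az functionapp show --resource-group <resource-group> --name <function-app-name> \
  --query possibleOutboundIpAddresses --output tsv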
I was using Strapi 3.0.0.next-11 and then migrated my APIs to version 3.6.8.
In 3.6.8 I see this error in a pop-up, for collections which have relations:
An error occurred during models config fetch.
on logs i see this error :
Cast to ObjectId failed for value "http://54.179.156.135:1339/uploads/d26af51633f2451a934896bfc125ec90.jpg" at path "_id" for model "file"
Why is this happening on 3.6.8? I have been using the older version without any issues, and with this new version I am unable to fetch anything.
I am using the following:
node : 14.17.6 (LTS)
npm : 6.14.15
strapi : 3.6.8
I have also attached the image of my package.json.
So I figured out the reason why this was happening in my case. After migrating to 3.6.8, the fields in a model which have this type:
"thumbnail": {
"model": "file",
"via": "related",
"plugin": "upload"
}
need to have their values stored as an ObjectId in the database, as a reference to an entry in upload_file, which is maintained by Strapi internally.
Earlier, thumbnail would store the value as a string URL (the URL of the image).
Example :
thumbnail : https://my_image_url_path/img.jpg
Now, thumbnail stores the reference, i.e. an ObjectId, which refers to an entry in the upload_file collection, which is responsible for maintaining all the images uploaded via the Strapi upload API.
Example :
thumbnail : ObjectId("60f53bf69f811d268d8fedb1")
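For records created before the migration that still hold the old string URL, the cleanest fix is to re-attach the images through the Strapi admin/upload API; as a rough manual sketch, you can list the upload_file entries and point each document at the matching _id (connection string, the articles collection name, and the document id are hypothetical; back up the database first):
mongo "<your-connection-string>" --eval 'db.upload_file.find({}, { url: 1 }).forEach(printjson)'
# then, for each affected document, replace the string URL with the matching file _id, e.g.:
mongo "<your-connection-string>" --eval 'db.articles.updateOne({ _id: ObjectId("<doc-id>") }, { $set: { thumbnail: ObjectId("60f53bf69f811d268d8fedb1") } })'
# note: this only patches the owning documents; Strapi also keeps its own bookkeeping in upload_file,
# so re-uploading through the admin panel is the safer route if you only have a few records.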
I have the following copy command:
copy ink.contact_trace_records
from 's3://mybucket/mykey'
iam_role 'arn:aws:iam::accountnum:role/rolename'
json 'auto';
Where the role has the full-access-to-S3 and full-access-to-Redshift policies attached (I know this is not a good idea, but I'm just losing it here :) ). The cluster has the role attached and has enhanced VPC routing enabled. I get the following:
[2019-05-28 14:07:34] [XX000][500310] [Amazon](500310) Invalid operation: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 370F75618922AFC0,ExtRid dp2mcnlofFzt4dz00Lcm188/+ta3OEKVpuFjnZSYsC0pJPMiULk7I6spOpiZwXc04VVRdxizIj4=,CanRetry 1
[2019-05-28 14:07:34] Details:
[2019-05-28 14:07:34] -----------------------------------------------
[2019-05-28 14:07:34] error: S3ServiceException:Access Denied,Status 403,Error AccessDenied,Rid 370F75618922AFC0,ExtRid dp2mcnlofFzt4dz00Lcm188/+ta3OEKVpuFjnZSYsC0pJPMiULk7I6spOpiZwXc04VVRdxizIj4=,CanRetry 1
[2019-05-28 14:07:34] code: 8001
[2019-05-28 14:07:34] context: S3 key being read : s3://redacted
[2019-05-28 14:07:34] query: 7794423
[2019-05-28 14:07:34] location: table_s3_scanner.cpp:372
[2019-05-28 14:07:34] process: query3_986_7794423 [pid=13695]
[2019-05-28 14:07:34] -----------------------------------------------;
What am I missing here? The cluster has full access to S3, and it is not even in a custom VPC; it is in the default one. Thoughts?
Check the object owner. The object is likely owned by another account. This is the reason for S3 403s in many situations where objects cannot be accessed by a role or account that has full permissions. The typical indicator is that you can list the object(s) but cannot get or head them.
In the following example I'm trying to access a Redshift audit log from a 2nd account. The account 012345600000 is granted access to the bucket owned by 999999999999 using the following policy:
{"Version": "2012-10-17",
"Statement": [
{"Action": [
"s3:Get*",
"s3:ListBucket*"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-audit-logs",
"arn:aws:s3:::my-audit-logs/*"
],
"Principal": {
"AWS": [
"arn:aws:iam::012345600000:root"
]}}]
}
Now I try to list (s3 ls) then copy (s3 cp) a single log object:
aws --profile 012345600000 s3 ls s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/
# 2019-05-28 14:49:46 376 …connectionlog_2019-05-25T19:03.gz
aws --profile 012345600000 s3 cp s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz ~/Downloads/…connectionlog_2019-05-25T19:03.gz
# fatal error: An error occurred (403) when calling the HeadObject operation: Forbidden
Then I check the object ownership from the account that owns the bucket.
aws --profile 999999999999 s3api get-object-acl --bucket my-audit-logs --key AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz
# "Owner": {
# "DisplayName": "aws-cm-prod-auditlogs-uswest2", ## Not my account!
# "ID": "b2b456ce30a967fb1877b3c9594773ae0275fee248e3ebdbff43d66907b89144"
# },
I then copy the object in the same bucket with --acl bucket-owner-full-control. This makes me the owner of the new object. Note the changed SharedLogs/ prefix.
aws --profile 999999999999 s3 cp \
s3://my-audit-logs/AWSLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz \
s3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz \
--acl bucket-owner-full-control
Now I can download (or load to Redshift!) the new object that is shared from the same bucket.
aws --profile 012345600000 s3 cp s3://my-audit-logs/SharedLogs/999999999999/redshift/us-west-2/2019/05/25/…connectionlog_2019-05-25T19:03.gz ~/Downloads/…connectionlog_2019-05-25T19:03.gz
# download: …
I'm using the Cloud 9 IDE to develop an application using MongoDB. I created a database called "appdata" at MongoLab and the following user:
{
"_id": "appdata.admin",
"user": "admin",
"db": "appdata",
"credentials": {
"SCRAM-SHA-1": {
"iterationCount": 10000,
"salt": "K/WUzUDbi3Ip4Vy59gNV7g==",
"storedKey": "9ow35+PtcOOhfuhY7Dtk7KnfYsM=",
"serverKey": "YfsOlFx1uvmP+VaBundvmVGW+3k="
}
},
"roles": [
{
"role": "dbOwner",
"db": "appdata"
}
]
}
Whenever I try connecting to the database through Cloud 9 Shell using the following command (given by MongoLab with my newly created user):
mongo ds057244.mongolab.com:57244/appdata -u admin -p admin
I get the following error message:
MongoDB shell version: 2.6.11
connecting to: ds057244.mongolab.com:57244/appdata
2015-11-22T05:23:49.015+0000 Error: 18 { ok: 0.0, errmsg: "auth failed",
code: 18 } at src/mongo/shell/db.js:1292
exception: login failed
Also, in my JavaScript file running on Cloud 9, while following this tutorial (which uses Mongoose to access the DB), I got stuck on the POST route for bears. Whenever I send a POST request through Postman with the specified fields set, the route doesn't return anything (neither a created bear nor an error message), which makes me think the problem is also a failure to log in to the database. The previous GET request works just fine, and my code is exactly the same as the tutorial's.
Does anyone know what the problem is in either case and what I need to do to solve it?
The shell problem was fixed by updating the shell to match the database version (which was 3.0.3); the 2.6 shell cannot authenticate against a 3.0 server whose users have SCRAM-SHA-1 credentials (as shown in the user document above), which is why the login failed.
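If you want to confirm the mismatch before upgrading, compare the shell version with the server version (host, port, and credentials below are the ones from the question):
mongo --version
# MongoDB shell version: 2.6.11  <- older than the 3.0.x server
mongo ds057244.mongolab.com:57244/appdata -u admin -p admin --eval 'db.version()'
# prints the server version once the upgraded shell can authenticate, e.g. 3.0.3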
For the JavaScript files, I restarted the tutorial and made sure I downloaded all the necessary dependencies at their most recent stable versions (not the ones shown in the tutorial); after that the problem was solved.