I can't find any list showing the accepted types for outputs in Tekton Tasks.
Is the set of types fixed, or is it possible to use any file extension? I have been having trouble with .xml files in my case.
Thanks in advance
Actually, Tekton outputs do not care about the file extension; all file outputs belong to the type storage.
The valid output types are:
// PipelineResourceTypeGit indicates that this source is a GitHub repo.
PipelineResourceTypeGit PipelineResourceType = "git"
// PipelineResourceTypeStorage indicates that this source is a storage blob resource.
PipelineResourceTypeStorage PipelineResourceType = "storage"
// PipelineResourceTypeImage indicates that this source is a docker Image.
PipelineResourceTypeImage PipelineResourceType = "image"
// PipelineResourceTypeCluster indicates that this source is a k8s cluster Image.
PipelineResourceTypeCluster PipelineResourceType = "cluster"
// PipelineResourceTypePullRequest indicates that this source is a SCM Pull Request.
PipelineResourceTypePullRequest PipelineResourceType = "pullRequest"
// PipelineResourceTypeCloudEvent indicates that this source is a cloud event URI
PipelineResourceTypeCloudEvent PipelineResourceType = "cloudEvent"
// PipelineResourceTypeGCS is the subtype for the GCSResources, which is backed by a GCS blob/directory.
PipelineResourceTypeGCS PipelineResourceType = "gcs"
I am learning Terraform deployments coupled with GCP to streamline deployments.
I have successfully deployed a PostgreSQL DB.
Now I am trying to use Terraform outputs to write the private IP generated by the PostgreSQL DB server to the output directory where Terraform is initiated from.
What is not clear to me is:
(1) Is the output defined within the same main.tf file?
(2) Where are the output parameters referenced from? I cannot find documentation that explains this properly, so I keep getting this error on apply: Error: Reference to undeclared resource
My main.tf looks like this:
resource "google_sql_database_instance" "main" {
name = "db"
database_version = "POSTGRES_12"
region = "us-west1"
settings {
availability_type = "REGIONAL"
tier = "db-custom-2-8192"
disk_size = "10"
disk_type = "PD_SSD"
disk_autoresize = "true"
}
}
output "instance_ip_addr" {
value = google_sql_database_instance.private_network.id
description = "The private IP address of the main server instance."
}
As for code style, there would usually be a separate file called outputs.tf where you add all the values you want output after a successful apply. The second part of the question is two-fold:
You have to understand how references to resource attributes/arguments work [1][2].
You have to reference the correct logical ID of the resource, i.e., the name you assigned to it, followed by the argument/attribute [3].
So, in your case that would be:
output "instance_ip_addr" {
value = google_sql_database_instance.main.private_ip_address # <RESOURCE TYPE>.<NAME>.<ATTRIBUTE>
description = "The private IP address of the main server instance."
}
[1] https://www.terraform.io/language/expressions/references#references-to-resource-attributes
[2] https://www.terraform.io/language/resources/behavior#accessing-resource-attributes
[3] https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/sql_database_instance#attributes-reference
To reference an attribute of a resource, you should use the form:
[resource type].[resource name].[attribute]
In this case, the output should be:
output "instance_ip_addr" {
value = google_sql_database_instance.main.private_ip_address
description = "The private IP address of the main server instance."
}
The output attributes are listed in the documentation. It's fine to put that in main.tf.
I'm trying to create a bucket in Cloud Object Storage using Python. I have followed the instructions in the API docs.
This is the code I'm using:
COS_ENDPOINT = "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints"

# Create client
cos = ibm_boto3.client("s3",
    ibm_api_key_id=COS_API_KEY_ID,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT
)

s3 = ibm_boto3.resource('s3')

def create_bucket(bucket_name):
    print("Creating new bucket: {0}".format(bucket_name))
    s3.Bucket(bucket_name).create()
    return

bucket_name = 'test_bucket_442332'
create_bucket(bucket_name)
I'm getting this error. I tried setting CreateBucketConfiguration={"LocationConstraint": "us-south"}, but it doesn't seem to work:
"ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to."
Resolved by going to https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints#endpoints and choosing the endpoint specific to the region I need. The "Endpoint" provided with the credentials is not the actual endpoint.
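For illustration, here is a minimal sketch of the corrected setup, assuming the us-south regional endpoint and the same credential variables as above (the endpoint and bucket name are placeholders; adjust them to your region and instance):

import ibm_boto3
from ibm_botocore.client import Config

# Region-specific public endpoint (us-south assumed here), not the
# "endpoints" listing URL that ships with the credentials.
COS_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"

cos = ibm_boto3.client("s3",
    ibm_api_key_id=COS_API_KEY_ID,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT
)

# With a regional endpoint, the bucket is created in that region,
# so no CreateBucketConfiguration/LocationConstraint is needed.
cos.create_bucket(Bucket='test-bucket-442332')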
I created a file with the following Node.js code:
const {Storage} = require('@google-cloud/storage')

var gcs = new Storage()
var bucket = gcs.bucket('bucket-name')
const file = bucket.file('filename')

// fileData is a utf8 buffer
file.save(fileData, function(err) {
  console.log('Error:' + err)
})
Then, I went in through the Cloud Console and deleted the file.
I then ran the code above again, but received the error "[service account] does not have storage.objects.delete access to bucket-name/filename." So I went in and added storage.objects.delete access to the service account through IAM, but I continue to get the error.
It seems that the object is still sitting inside the bucket, and it still has the old service account access (without storage.objects.delete), but I don't see the object anywhere. Versioning is suspended on this bucket.
I have since gone through the same steps with the same bucket but using a different filename and don't see the error message. This seems to show that the new service account access is being properly applied to new files, but not to old files. This is surprising, since I'm using "Bucket Policy Only" on this bucket.
Can anyone figure out how to fix this? Thanks!
Cloud Storage Object metadata
// 1. Delete the file with that name (ignore if it does not exist).
await bucket
  .file(filePath)
  .delete({ ignoreNotFound: true });

// 2. Save the new file with the same name.
const blob = bucket.file(filePath);
await blob.save(fil?.buffer);

// 3. Get the object's metadata.
const [metadata] = await storage
  .bucket(bucketName)
  .file(filePath)
  .getMetadata();

newDocObj.location = metadata.mediaLink;
I used metadata.mediaLink to get the latest download link of the uploaded file from Google Cloud Storage.
I am trying to download a file for the first time from Google Cloud Storage.
I set the path to the googstruct.json service account key file that I downloaded from https://cloud.google.com/storage/docs/reference/libraries#client-libraries-usage-python
Do I need to set the authorization to Google Cloud outside the code somehow? Or is there a better "How to use Google Cloud Storage" guide than the one on the Google site?
It seems like I am passing the wrong type to storage_client = storage.Client().
The exception string is below.
Exception has occurred: google.auth.exceptions.DefaultCredentialsError
The file C:\Users\Cary\Documents\Programming\Python\QGIS\GoogleCloud\googstruct.json does not have a valid type.
Type is None, expected one of ('authorized_user', 'service_account').
MY PYTHON 3.7 CODE
from google.cloud import storage
import os

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "C:\\GoogleCloud\\googstruct.json"


def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob from the bucket."""
    storage_client = storage.Client()
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)
    print('Blob {} downloaded to {}.'.format(
        source_blob_name,
        destination_file_name
    ))


# Instantiates a client
storage_client = storage.Client()

bucket_name = 'structure_ssi'
destination_file_name = "C:\\Users\\18809_PIPEM.shp"
source_blob_name = '18809_PIPEM.shp'

download_blob(bucket_name, source_blob_name, destination_file_name)
I did look at this but I cannot tell if this is my issue. I have tried both.
('Unexpected credentials type', None, 'Expected', 'service_account') with oauth2client (Python)
This error means that the JSON service account credentials that you are trying to use (C:\\GoogleCloud\\googstruct.json) are corrupt or the wrong type.
The first (or second) line in the file googstruct.json should be "type": "service_account".
A few more items to improve your code:
You do not need to use \\; just use / to make your code easier and cleaner to read.
Load your credentials directly and do not modify environment variables:
storage_client = storage.Client.from_service_account_json('C:/GoogleCloud/googstruct.json')
Wrap API calls in try / except. Stack traces do not impress customers. It is better to have clear, simple, easy to read error messages.
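As a minimal sketch putting these suggestions together (the key file path, bucket, and object names are the ones from the question; the exception handling shown is just one reasonable choice):

from google.cloud import exceptions, storage


def download_blob(bucket_name, source_blob_name, destination_file_name):
    """Downloads a blob, loading credentials directly from the key file."""
    storage_client = storage.Client.from_service_account_json(
        'C:/GoogleCloud/googstruct.json')
    bucket = storage_client.get_bucket(bucket_name)
    blob = bucket.blob(source_blob_name)
    blob.download_to_filename(destination_file_name)


try:
    download_blob('structure_ssi', '18809_PIPEM.shp', 'C:/Users/18809_PIPEM.shp')
except exceptions.NotFound as err:
    print('Bucket or object not found: {}'.format(err))
except Exception as err:
    print('Download failed: {}'.format(err))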
I want to add metadata to a Minio object while adding the file as an object to Minio object storage using Python. I was able to find how to access the metadata of an object stored in Minio, but there is no example of adding metadata while adding a file to Minio storage.
Regards,
Ritu Ranjan
Well, there is an example in the python minio client tests:
content_type = 'application/octet-stream'
metadata = {'x-amz-meta-testing': 'value'}

client.put_object(bucket_name,
                  object_name + '-metadata',
                  MB_11_reader,  # data stream to upload
                  MB_11,         # size of the stream in bytes
                  content_type,
                  metadata)
The trick is that the metadata dict should have keys in the format
'x-amz-meta-yourkey'
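For a self-contained sketch with the plain minio client (the endpoint, credentials, bucket, and file names below are placeholders; the positional put_object arguments follow the test code above):

import os

from minio import Minio

# Placeholder endpoint and credentials; replace with your own.
client = Minio('localhost:9000',
               access_key='your-access-key',
               secret_key='your-secret-key',
               secure=False)

metadata = {'x-amz-meta-testing': 'value'}

file_path = '/mnt/some_file'
with open(file_path, 'rb') as file_data:
    file_size = os.stat(file_path).st_size
    client.put_object('my-bucket', 'some_file', file_data, file_size,
                      content_type='application/octet-stream',
                      metadata=metadata)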
You can use pyminio:
from pyminio import Pyminio
pyminio_client = Pyminio.from_credentials(
    endpoint='<your-minio-endpoint>',  # e.g. "localhost:9000/"
    access_key='<your-minio-access-key>',
    secret_key='<your-minio-secret-key>'
)
metadata = {'Pyminio-is': 'Awesome'}
pyminio_client.put_file(to_path='/foo/bar/baz', file_path='/mnt/some_file', metadata=metadata)
It automatically strips the 'x-amz-meta-' prefix from the metadata key names, so it is easier to use with pyminio_client.get('/foo/bar/baz').