Unable to find the current configuration of OSSFS/S3FS?

I have connected an Alibaba Cloud OSS bucket using OSSFS, but now I want to switch to a different OSS bucket URL. How can I check which URL it is currently configured with?
I have checked the documentation and ossfs --help, but there is no command that prints the current configuration.
Thanks

You need to configure a custom domain name and point it to the public endpoint of the bucket.
Refer to the following link:
https://www.alibabacloud.com/help/doc-detail/31902.htm
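For reference, a quick way to check the current endpoint (assuming ossfs was mounted from the command line, as is typical): the endpoint is not stored in a config file but passed as the -ourl mount option, so it shows up in the running process's arguments. The paths below are the ossfs defaults.

# Show the running ossfs process, including the -ourl=... endpoint option
ps aux | grep [o]ssfs
# List active ossfs mounts
mount | grep ossfs
# Credentials (bucket:access_key_id:access_key_secret) live here by default
cat /etc/passwd-ossfs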


Where is the config file for settings in Grafana

I am a free Grafana cloud plan user.
I want to enable anonymous access to my dashboard.
I searched in many places, and they all talk about changing a config file.
I cannot find where that config file is, even though everyone who mentions it seems to know its location. I also found this official document that explains how to enable anonymous access.
To me, it feels like I need to log into a console on the Grafana server, but I cannot find it.
How can I change the config file to enable anonymous access to my dashboard? Does it require a paid plan?
You don't have access to the config file as a free Grafana Cloud user. Deploy your own Grafana instance and then you will be able to customize the config file.
For anyone coming here from a search, my grafana.ini was in
/etc/grafana/grafana.ini
More info on the config file location here.
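For self-hosted instances, enabling anonymous access is a small edit to that file. A minimal sketch, using Grafana's documented [auth.anonymous] settings (org_name must match an organization that exists on your instance):

# In /etc/grafana/grafana.ini
[auth.anonymous]
enabled = true        # allow viewing without logging in
org_name = Main Org.  # organization anonymous users are mapped to
org_role = Viewer     # read-only role for anonymous users

# Restart so the file is re-read
sudo systemctl restart grafana-server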

Is there a way to use an http proxy for google-cloud-cpp?

I am using google-cloud-cpp (C++ API for Google Cloud Platform functions) to create/read/write to buckets. When I am working from within the organization's firewall, I have to use a proxy to be able to connect to google cloud.
I see that we can configure a proxy using the gcloud command line:
gcloud config set proxy/type http
gcloud config set proxy/address x.x.x.x
gcloud config set proxy/port 8080
Can I do something similar when I use google-cloud-cpp?
If we look at the source code of the google-cloud-cpp library on GitHub, we can see that it is based on libcurl.
See:
https://github.com/googleapis/google-cloud-cpp/blob/master/google/cloud/storage/internal/curl_handle.cc
Following on from the comments by @Travis Webb, we then look at the docs for libcurl and find:
https://curl.haxx.se/libcurl/c/CURLOPT_PROXY.html
This documents the API that can be used to set proxy settings for programs that use libcurl. However, if we read deeper, we find a section on environment variables stating that http_proxy and https_proxy can be set.
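So, a minimal sketch: because the storage client ultimately goes through libcurl, setting the standard proxy environment variables before starting your program should route its traffic through the proxy. The proxy host/port and binary name below are placeholders.

# Point libcurl (and thus google-cloud-cpp) at the corporate proxy
export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
./my_gcs_program   # hypothetical binary built against google-cloud-cpp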

Use only a domain and disable https://storage.googleapis.com URL access

I am a newbie at cloud servers, and I've set up a Google Cloud Storage bucket to host image files. I've verified my domain and configured the bucket so the images can be viewed via my domain. The problem is that the same file is accessible both via my domain, example.com/images/tiny.png, and via storage.googleapis.com/example.com/images/tiny.png. Is there any way to disable access via storage.googleapis.com and use only my domain?
Google Cloud Platform Support Version:
NOTE: This is the reply from Google Cloud Platform Support when contacted via email...
I understand that you have set up a domain name for one of your Cloud Storage buckets and you want to make sure only URLs starting with your domain name have access to this bucket.
I am afraid that this is not possible because of how Cloud Storage permissions work.
Making a Cloud Storage bucket publicly readable also gives each of its files a public link, and currently this public link can't be disabled.
A workaround would be to implement a proxy program and run it on a Compute Engine virtual machine (a sketch of the core request handling appears after the list below). The VM will need a static external IP address so that you can map your domain to it. The proxy program will be in charge of returning the requested file from a predefined Cloud Storage bucket while the bucket itself remains inaccessible to the public.
You may find these documents helpful if you are interested in this workaround:
1. Quick start to set up a Linux VM (1).
2. Python API for accessing Cloud Storage files (2).
3. How to download service account keys to grant a program access to a set of services (3).
4. Pricing calculator for getting a picture on how much a VM may cost (4).
(1) https://cloud.google.com/compute/docs/quickstart-linux
(2) https://pypi.org/project/google-cloud-storage/
(3) https://cloud.google.com/iam/docs/creating-managing-service-account-keys
(4) https://cloud.google.com/products/calculator/
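As a rough illustration of what the proxy's core would do per request (not a full program): fetch one object from the now-private bucket using an access token. The bucket and object names below are the ones from the question; the JSON API's alt=media form returns the raw file contents.

# Obtain a token for the active (service) account and download one object
TOKEN=$(gcloud auth print-access-token)
curl -H "Authorization: Bearer ${TOKEN}" \
  "https://storage.googleapis.com/storage/v1/b/example.com/o/images%2Ftiny.png?alt=media" \
  -o tiny.png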
My Version:
It seems the solution to this question is really simple: just mount the Google Cloud Storage bucket on a VM instance with FUSE.
After mounting, private files from GCS can be served through the VM's IP address. The mount makes the Cloud Storage bucket act like a local directory.
The detailed documentation about how to set up FUSE in Google Cloud is here.
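A minimal sketch of the mount itself, assuming the Cloud Storage FUSE adapter (gcsfuse) is installed on the VM and using the bucket name from the question:

# Mount the private bucket as a local directory on the VM
mkdir -p /mnt/gcs
gcsfuse example.com /mnt/gcs
# Files are now readable at /mnt/gcs/images/tiny.png and can be served
# by any web server running on the VM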
There is, but it requires more work on your part.
Your current setup works because you've made the GCS bucket (example.com) public and then aliased your domain to it via DNS.
An alternative approach would be to limit access to the GCS bucket to one (or several) accounts and then run a web server that uses one of those accounts to access your image files. You could then either open the web server to anyone or restrict access to it as well.
More work for you (and possibly cost) but more control.

How to programmatically retrieve the name node hostname?

The IBM Analytics Engine docs have the following instructions for getting the name node hostname:
Go to Manage Cluster in IBM® Cloud and click the nodes tab to get the name node host name. It's the host name of the management-slave1 node type.
How can I programmatically retrieve the name node host name? Can I retrieve it via an API, or maybe I can get it by running a command over ssh. Failing that, can I derive it from one of the host names on vcap services?
Maybe this information should be provided to users in the vcap info?
In the end, I solved this using Ambari. The solution that worked for me is captured here: https://stackoverflow.com/a/47844056/1033422
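For anyone who wants the gist without following the link: Ambari's REST API can report which host runs the NAMENODE component. A sketch, where the Ambari host, port (9443 on IBM Analytics Engine, if I recall correctly), credentials, and cluster name are placeholders you'd take from your service credentials:

# Ask Ambari which host carries the HDFS NAMENODE component
curl -k -u $AMBARI_USER:$AMBARI_PASSWORD \
  "https://$AMBARI_HOST:9443/api/v1/clusters/$CLUSTER_NAME/services/HDFS/components/NAMENODE?fields=host_components/HostRoles/host_name"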

Programmatically download RDP file of Azure Resource Manager VM

I am able to create a VM from a custom image using the Azure Resource Manager SDK for .NET. Now I want to download the RDP file for the virtual machine programmatically. I have searched and found the REST API for Azure 'Classic' deployments, which includes a call to download the RDP file, but I can't find an equivalent in the REST API for 'ARM' deployments. I also can't find any such method in the .NET SDK for Azure.
Is there any way to achieve this? Please guide.
I don't know of a way to get the RDP file directly, but you can get all the information you need from the deployment itself. On the deployment, you can define outputs for the values you need, such as the public IP's DNS name. See this:
https://github.com/bmoore-msft/AzureRM-Samples/blob/master/VMCSEInstallFilePS/azuredeploy.json#L213-215
If your environment is more complex (load balancers, network security groups) you need to account for port numbers, etc.
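Also worth noting: an .rdp file is just a small text file, so once the deployment outputs give you the VM's public DNS name or IP you can generate one yourself. A sketch, where the host name is a placeholder:

# Write a minimal RDP file pointing at the VM's public endpoint
cat > myvm.rdp <<'EOF'
full address:s:myvm.westus.cloudapp.azure.com:3389
prompt for credentials:i:1
EOF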