Writing logs to a gcloud Vertex AI Endpoint using the Google Cloud Logging client fails with google.api_core.exceptions.MethodNotImplemented: 501

I'm trying to use the Google Cloud Logging client library to write logs to Google Cloud. Specifically, I want the logs to be attached to a managed resource, in this case a Vertex AI endpoint:
Code sample:
import json
import logging

from google.api_core.client_options import ClientOptions
import google.cloud.logging_v2 as logging_v2
from google.cloud.logging_v2.resource import Resource
from google.oauth2 import service_account


def init_module_logger(module_name: str) -> logging.Logger:
    module_logger = logging.getLogger(module_name)
    module_logger.setLevel(settings.LOG_LEVEL)
    credentials = service_account.Credentials.from_service_account_info(
        json.loads(SA_KEY_JSON)
    )
    client = logging_v2.client.Client(
        credentials=credentials,
        client_options=ClientOptions(api_endpoint="us-east1-aiplatform.googleapis.com"),
    )
    handler = client.get_default_handler(
        resource=Resource(
            type="aiplatform.googleapis.com/Endpoint",
            labels={"endpoint_id": "ENDPOINT_NUMBER_ID", "location": "us-east1"},
        )
    )
    # Assume we have the formatter
    handler.setFormatter(ENRICHED_FORMATTER)
    module_logger.addHandler(handler)
    return module_logger


logger = init_module_logger(__name__)
logger.info("This Fails with 501")
And I am getting:
google.api_core.exceptions.MethodNotImplemented: 501 The GRPC target
is not implemented on the server, host:
us-east1-aiplatform.googleapis.com, method:
/google.logging.v2.LoggingServiceV2/WriteLogEntries. Sent all pending
logs.
I thought we needed to enable the API, but I was told it is enabled and that we have the https://www.googleapis.com/auth/logging.write scope.
What could be causing the error?

As mentioned by @DazWilkin in the comments, the error occurs because the API endpoint us-east1-aiplatform.googleapis.com does not have a method called WriteLogEntries.
That endpoint is used to send requests to Vertex AI services, not to Cloud Logging. The API endpoint to use is logging.googleapis.com, as shown in the entries.write method. Refer to this documentation for more info.
The ClientOptions() call should have logging.googleapis.com as the api_endpoint parameter. If the client_options parameter is not specified, logging.googleapis.com is used by default.
After changing the api_endpoint parameter, I was able to write the log entries successfully. The ClientOptions() is as follows:
client = logging_v2.client.Client(
    credentials=credentials,
    client_options=ClientOptions(api_endpoint="logging.googleapis.com"),
)
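Since logging.googleapis.com is the default, the client_options argument can also be omitted entirely. A minimal sketch of the corrected client construction, keeping the question's placeholder resource labels:

# client_options omitted: the client defaults to logging.googleapis.com
client = logging_v2.client.Client(credentials=credentials)
handler = client.get_default_handler(
    resource=Resource(
        type="aiplatform.googleapis.com/Endpoint",
        labels={"endpoint_id": "ENDPOINT_NUMBER_ID", "location": "us-east1"},
    )
)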

Related

Flow testnet getting: An error occurred when interacting with the Access API

I'm trying to make a request to the testnet but getting the following error:
HTTP Request Error: An error occurred when interacting with the Access API.
transport=FetchTransport
error=Failed to fetch hostname=access.devnet.nodes.onflow.org:9000
path=/v1/scripts?block_height=sealed
method=POST
requestBody={
"script":"aW1wb3J0IEZ1bmdpYmxlVG9rZW4gZnJvbSAweDlhMDc2NmQ5M2I2NjA4YjcKaW1wb3J0IEZVU0QgZnJvbSAweGUyMjNkOGE2MjllNDljNjgKCnB1YiBmdW4gbWFpbihhZGRyZXNzOiBBZGRyZXNzKTogVUZpeDY0IHsKICAgIGxldCBhY2NvdW50ID0gZ2V0QWNjb3VudChhZGRyZXNzKQoKICAgIGxldCB2YXVsdFJlZiA9IGFjY291bnQuZ2V0Q2FwYWJpbGl0eSgvcHVibGljL2Z1c2RCYWxhbmNlKSEKICAgICAgICAuYm9ycm93PCZGVVNELlZhdWx0e0Z1bmdpYmxlVG9rZW4uQmFsYW5jZX0+KCkKICAgICAgICA/PyBwYW5pYygiQ291bGQgbm90IGJvcnJvdyBCYWxhbmNlIHJlZmVyZW5jZSB0byB0aGUgVmF1bHQiKQoKICAgIHJldHVybiB2YXVsdFJlZi5iYWxhbmNlCn0=",
"arguments":["eyJ0eXBlIjoiQWRkcmVzcyIsInZhbHVlIjoiMHg1ODA2MjJlNzQ1MTgzYjE2In0="]
}
The base64 decoded script is:
import FungibleToken from 0x9a0766d93b6608b7
import FUSD from 0xe223d8a629e49c68

pub fun main(address: Address): UFix64 {
    let account = getAccount(address)

    let vaultRef = account.getCapability(/public/fusdBalance)!
        .borrow<&FUSD.Vault{FungibleToken.Balance}>()
        ?? panic("Could not borrow Balance reference to the Vault")

    return vaultRef.balance
}
and the arguments are:
{"type":"Address","value":"0x580622e745183b16"}
When I run the following command in the CLI, it works: flow scripts execute -n testnet scripts/getFUSDBalance.cdc --arg "Address:580622e745183b16"
Not sure why I'm getting issues with this.
EDIT:
I didn't mention that the error is coming from FCL.
Here's the code I'm using to interact with Flow:
async getFUSDBalance(): Promise<Result<number, string>> {
    const scriptText = getFUSDBalance as string;
    const user = await this.getCurrentUser();
    return await flow.query<number>({
        cadence: scriptText,
        payer: fcl.authz,
        authorizations: [fcl.authz],
        args: (arg, t) => [
            arg(user.addr, t.Address)
        ]
    })
}
flow.query<T> is just a wrapper for fcl.query
It works in the CLI but not from FCL
EDIT 2:
So I found that the real issue I was facing was this error:
Fetch API cannot load access.devnet.nodes.onflow.org:9000/v1/scripts?block_height=sealed. URL scheme "access.devnet.nodes.onflow.org" is not supported
access.devnet.nodes.onflow.org:9000 is gRPC (I think). I should have been using https://rest-testnet.onflow.org. However, when I change to that I get a CORS violation. I think I read somewhere that you can't access the testnet from localhost (why?), but I deployed to a *.app domain and I'm getting the same error: No 'Access-Control-Allow-Origin' header is present on the requested resource. Is the CORS policy not set up for *.app domains?
So it turns out that changing the accessNode.api value to https://rest-testnet.onflow.org did work. Not sure why it didn't when I posted the second edit, but whatever. This works for localhost too.
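For reference, a minimal sketch of the FCL configuration pointing at the testnet REST access node (the accessNode.api key is standard FCL configuration; the import style is an assumption about your setup):

import * as fcl from "@onflow/fcl";

// Use the REST/HTTP access node rather than the gRPC host
fcl.config().put("accessNode.api", "https://rest-testnet.onflow.org");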

How to create a bucket using the Python SDK?

I'm trying to create a bucket in Cloud Object Storage using Python. I have followed the instructions in the API docs.
This is the code I'm using:
COS_ENDPOINT = "https://control.cloud-object-storage.cloud.ibm.com/v2/endpoints"

# Create client
cos = ibm_boto3.client("s3",
    ibm_api_key_id=COS_API_KEY_ID,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT
)

s3 = ibm_boto3.resource('s3')

def create_bucket(bucket_name):
    print("Creating new bucket: {0}".format(bucket_name))
    s3.Bucket(bucket_name).create()
    return

bucket_name = 'test_bucket_442332'
create_bucket(bucket_name)
I'm getting the error below. I tried setting CreateBucketConfiguration={"LocationConstraint": "us-south"}, but it doesn't seem to work.
"ClientError: An error occurred (IllegalLocationConstraintException) when calling the CreateBucket operation: The unspecified location constraint is incompatible for the region specific endpoint this request was sent to."
Resolved by going to https://cloud.ibm.com/docs/cloud-object-storage?topic=cloud-object-storage-endpoints#endpoints and choosing the endpoint specific to the region I need. The "Endpoint" provided with the credentials is not the actual data endpoint.
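For example, a minimal sketch assuming the us-south region (the regional data endpoint and bucket name here are examples; the API key and instance CRN are placeholders from the question):

import ibm_boto3
from ibm_botocore.client import Config

# Region-specific data endpoint (public us-south), not the /v2/endpoints control URL
COS_ENDPOINT = "https://s3.us-south.cloud-object-storage.appdomain.cloud"

cos = ibm_boto3.client("s3",
    ibm_api_key_id=COS_API_KEY_ID,
    ibm_service_instance_id=COS_INSTANCE_CRN,
    config=Config(signature_version="oauth"),
    endpoint_url=COS_ENDPOINT
)

cos.create_bucket(Bucket="test-bucket-442332")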

409 (request "Conflict") when creating second Endpoint connection in MongoDB Atlas using Terraform

I need to create many MongoDB Atlas endpoint connections using Terraform.
I successfully created the first one using this code:
#Private endpoint connection
resource "mongodbatlas_private_endpoint" "dbpe" {
  project_id    = var.prj_id
  provider_name = "AWS"
  region        = var.aws_region
}

#AWS endpoint for secure connect to mongo db
resource "aws_vpc_endpoint" "ec2" {
  vpc_id = var.sh_vpc
  #service_name      = "com.amazonaws.${var.aws_region}.ec2"
  service_name       = mongodbatlas_private_endpoint.dbpe.endpoint_service_name
  vpc_endpoint_type  = "Interface"
  security_group_ids = [
    aws_security_group.lb_sg.id,
  ]
  subnet_ids = [
    aws_subnet.subnet1.id,
    var.sh_subnet
  ]
  tags = {
    "Name" = local.tname
  }
  #private_dns_enabled = true
}
But when I try to use this code a second time in another folder (another tfstate), it fails with this error:
Error: error creating MongoDB Private Endpoints Connection: POST https://cloud.mongodb.com/api/atlas/v1.0/groups/***/privateEndpoint: 409 (request "Conflict") A PrivateLink Endpoint Service already exists for AWS region US_EAST_2.
As I understand it, the second "mongodbatlas_private_endpoint" "dbpe" tries to create another endpoint service. But when I create the second endpoint manually through the web UI, it uses the same service as the first endpoint.
How can I tell the second endpoint to use the existing service?
Or is this all wrong?
Please help!
Thank you!
I found the solution.
Creating the "Endpoint Connection" really creates Endpoint only when you do it at first time. All of next times is creating an only association between Atlas endpoint and new AWS Endpoint.
In terraform I tried to create an Atlas endpoint second time and catch an error (because of limit - 1 endpoint per region). All I need to do - is create "Basic Endpoint" one time (by separate folder with own tfstate) and don't delete it. And for each new AWS endpoint need to create a new link from AWS Endpoint to "Basic". I do it by a terraform resource:
mongodbatlas_private_endpoint_interface_link
Resource "mongodbatlas_private_endpoint" is not need now. A "service_name" parameter in "aws_vpc_endpoint" you can hardcoded from "Basic" Endpoint. Use "output" to see mongodbatlas_private_endpoint.test.private_link_id - this is what you need.

Authenticate with ECE Elasticsearch Sink from Apache Flink (Scala code)

I get a compiler error when using the example provided in the Flink documentation. The Flink documentation provides sample Scala code to set the REST client factory parameters when talking to Elasticsearch: https://ci.apache.org/projects/flink/flink-docs-stable/dev/connectors/elasticsearch.html.
When trying out this code, I get a compiler error in IntelliJ which says "Cannot resolve symbol restClientBuilder".
I found the following SO question, which is EXACTLY my problem except that it is in Java and I am doing this in Scala:
Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4)
I tried copy-pasting the solution code provided in the above SO question into IntelliJ; the auto-converted code also has compiler errors.
// provide a RestClientFactory for custom configuration on the internally created REST client
// I only show setMaxRetryTimeoutMillis for illustration purposes; the actual code will use an HTTP custom callback
esSinkBuilder.setRestClientFactory(
  restClientBuilder -> {
    restClientBuilder.setMaxRetryTimeoutMillis(10)
  }
)
Then I tried (auto-generated Java-to-Scala code by IntelliJ):
// provide a RestClientFactory for custom configuration on the internally created REST client
import org.apache.http.auth.AuthScope
import org.apache.http.auth.UsernamePasswordCredentials
import org.apache.http.client.CredentialsProvider
import org.apache.http.impl.client.BasicCredentialsProvider
import org.apache.http.impl.nio.client.HttpAsyncClientBuilder
import org.elasticsearch.client.RestClientBuilder

esSinkBuilder.setRestClientFactory((restClientBuilder) => {
  def foo(restClientBuilder) = restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
    override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
      // elasticsearch username and password
      val credentialsProvider = new BasicCredentialsProvider
      credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
      httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
    }
  })
  foo(restClientBuilder)
})
The original code snippet produces the error "cannot resolve RestClientFactory", and the Java-to-Scala conversion shows several other errors.
So basically I need to find a Scala version of the solution described in Apache Flink (v1.6.0) authenticate Elasticsearch Sink (v6.4).
Update 1: I was able to make some progress with some help from IntelliJ. The following code compiles and runs but there is another problem.
esSinkBuilder.setRestClientFactory(
  new RestClientFactory {
    override def configureRestClientBuilder(restClientBuilder: RestClientBuilder): Unit = {
      restClientBuilder.setHttpClientConfigCallback(new RestClientBuilder.HttpClientConfigCallback() {
        override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder = {
          // elasticsearch username and password
          val credentialsProvider = new BasicCredentialsProvider
          credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials(es_user, es_password))
          httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
          httpClientBuilder.setSSLContext(trustfulSslContext)
        }
      })
    }
  }
)
The problem is that I am not sure whether I should be creating a new RestClientFactory object. What happens is that the application connects to the Elasticsearch cluster but then discovers that the SSL cert is not valid, so I had to put in the trustfulSslContext (as described here: https://gist.github.com/iRevive/4a3c7cb96374da5da80d4538f3da17cb; a sketch of such a context is below). This got me past the SSL issue, but now the ES REST client does a ping test, the ping fails, it throws an exception and the app shuts down. I suspect the ping fails because of the SSL error and maybe it is not using the trustfulSslContext I set up as part of the new RestClientFactory. This makes me suspect that I should not have done the new and that there should be a simple way to update the existing RestClientFactory object; basically this is all happening because of my lack of Scala knowledge.
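For context, here is a minimal sketch of what a trust-all SSL context along the lines of that gist can look like (for testing against self-signed certificates only, never production; the val name trustfulSslContext matches the snippet above, the rest is an assumption):

import java.security.cert.X509Certificate
import javax.net.ssl.{SSLContext, TrustManager, X509TrustManager}

// Trust-all SSLContext for testing against self-signed certs only
val trustfulSslContext: SSLContext = {
  val trustAll = new X509TrustManager {
    override def getAcceptedIssuers: Array[X509Certificate] = Array.empty
    override def checkClientTrusted(chain: Array[X509Certificate], authType: String): Unit = ()
    override def checkServerTrusted(chain: Array[X509Certificate], authType: String): Unit = ()
  }
  val context = SSLContext.getInstance("TLS")
  context.init(null, Array[TrustManager](trustAll), null)
  context
}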
Happy to report that this is resolved. The code I posted in Update 1 is correct. The ping to ECE was not working for two reasons:
The certificate needs to include the complete chain including the root CA, the intermediate CA and the cert for the ECE. This helped get rid of the whole trustfulSslContext stuff.
The ECE was sitting behind an ha-proxy, and the proxy mapped the hostname in the HTTP request to the actual deployment cluster name in ECE. This mapping logic did not take into account that the Java REST high-level client uses the org.apache.http.HttpHost class, which builds the hostname as hostname:port_number even when the port number is 443. Since the mapping was not found because of the :443, the ECE returned a 404 error instead of 200 OK (the only way to find this was to look at unencrypted packets at the ha-proxy). Once the mapping logic in ha-proxy was fixed, the mapping was found and the pings are now successful.

ning.http.client Kerberos Example

How can I support Kerberos based authentication with the ning http client?
I am extending existing code which has support for NTLMAuth and I want to be able to include support for Kerberos, which is used on some of the websites that I need to test.
I want to be able to supply the user and password programmatically; I do not want to use a keytab or set up a krb5 configuration on the system where this is running.
I have the following code block:
import com.ning.http.client.RequestBuilder;
import com.ning.http.client.Realm.RealmBuilder;
....

RealmBuilder myRealmBuilder = new RealmBuilder()
        .setScheme(AuthScheme.KERBEROS)
        .setUsePreemptiveAuth(true)
        .setNtlmDomain(getDomain())
        .setNtlmHost(getHost())
        .setPrincipal(getUsername())
        .setPassword(getUserPassword());

RequestBuilder rb = new RequestBuilder()
        .setMethod(site.getMethod())
        .setUrl(site.getUrl())
        .setFollowRedirects(site.isFollowRedirects())
        .setRealm(myRealmBuilder.build());
Currently I get the error response:
FAILED: Invalid name provided (Mechanism level: KrbException: Cannot locate default realm)
Does anyone have a good example of how to do this correctly?