Receiving 404 when trying to edit a datasource despite being able to retrieve its details - Grafana

I have a datasource created in Grafana and am attempting to update it to refresh the bearer token used for auth access.
However, I'm receiving a 404 Not Found error from the Grafana API when making a request to localhost:3000/api/datasources/uid/:uid with a uid just received from the datasources/name API. I'm attempting the update as per the documentation: https://grafana.com/docs/grafana/latest/developers/http_api/data_source/#update-an-existing-data-source
I'm using the Grafana open-source Docker container with the Infinity plugin:
docker run -d -p 3000:3000 --name=grafana -e "GF_INSTALL_PLUGINS=yesoreyeram-infinity-datasource" grafana/grafana-oss
I'm able to create a datasource via the API; I just can't update an existing one.
My code is:
from json import loads, dumps
from requests import get, post

grafana_api_token = '<my api token>'
new_access_token = '<my new bearer token>'
my_data_source = 'my_data_source'
grafana_header = {"authorization": f"Bearer {grafana_api_token}", "content-type": "application/json;charset=UTF-8"}

# Look up the datasource by name to get its uid
grafana_datasource_url = f"http://localhost:3000/api/datasources/name/{my_data_source}"
firebolt_datasource_resp = get(url=grafana_datasource_url, headers=grafana_header)
full_datasource = loads(firebolt_datasource_resp.content.decode("utf-8"))
datasource_uid = full_datasource["uid"]

# Send back the full datasource definition, including the new bearer token
update_token_url = f"http://localhost:3000/api/datasources/uid/{datasource_uid}"
new_data = {"id": full_datasource["id"],
            "uid": full_datasource["uid"],
            "orgId": full_datasource["orgId"],
            "name": "new_data_source",
            "type": full_datasource["type"],
            "access": full_datasource["access"],
            "url": full_datasource["url"],
            "user": full_datasource["user"],
            "database": full_datasource["database"],
            "basicAuth": full_datasource["basicAuth"],
            "basicAuthUser": full_datasource["basicAuthUser"],
            "withCredentials": full_datasource["withCredentials"],
            "isDefault": full_datasource["isDefault"],
            "jsonData": full_datasource["jsonData"],
            "secureJsonData": {
                "bearerToken": new_access_token
            }
            }
update_bearer_token_resp = post(url=update_token_url, data=dumps(new_data), headers=grafana_header)

Oh, oh, oh, idiot mode. I was using post rather than put. Doh.
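For anyone else who hits this: the fix is a one-line change to the final request above (same variables as in the question; per the linked docs, updating an existing datasource is a PUT):

from requests import put

# PUT, not POST — POSTing to this endpoint is what produced the 404 above
update_bearer_token_resp = put(url=update_token_url, data=dumps(new_data), headers=grafana_header)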

Related

OpenSearch 1.2 - Index pattern programmatically

I am trying to create an index_pattern for a dashboard in OpenSearch programmatically.
Since I couldn't find anything related to it in the OpenSearch documentation, I tried the Elasticsearch saved_objects API:
POST /api/saved_objects/index-pattern/my_index_pattern
{
  "attributes": {
    "title": "my_index"
  }
}
But when I call it, I get the following error:
{
  "error" : "no handler found for uri [/api/saved_objects/index-pattern/my_index_pattern?pretty=true] and method [POST]"
}
How can I solve it? Is there a different call for OpenSearch to create an index_pattern?
curl -k -X POST http://<host>:5601/api/saved_objects/index-pattern/logs-* -H "osd-xsrf:true" -H "content-type:application/json" -d '
{
  "attributes": {
    "title": "logs-*",
    "timeFieldName": "@timestamp"
  }
}'
This works for me. Note that it targets OpenSearch Dashboards (port 5601) rather than the OpenSearch API itself, which is why the original call got "no handler found", and the osd-xsrf header is required for Dashboards write calls.
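If you prefer doing it from code, here is the same call as a minimal Python sketch (assuming the same Dashboards host, port 5601, and the logs-* pattern from the curl above):

import requests

# the saved_objects API is served by OpenSearch Dashboards on 5601,
# not by the OpenSearch API port
resp = requests.post(
    "http://<host>:5601/api/saved_objects/index-pattern/logs-*",
    headers={"osd-xsrf": "true", "content-type": "application/json"},
    json={"attributes": {"title": "logs-*", "timeFieldName": "@timestamp"}},
)
print(resp.status_code, resp.json())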

Setting up MirrorMaker 2 using the Kafka Connect REST API: PUT method not allowed

I am trying to set up MirrorMaker 2 using my current Connect cluster.
Based on this documentation, it can be done via the Connect REST API:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-RunningMirrorMakerinaConnectcluster
I followed the sample, sending this PUT request:
PUT /connectors/us-west-source/config HTTP/1.1
{
  "name": "us-west-source",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "source.cluster.alias": "us-west",
  "target.cluster.alias": "us-east",
  "source.cluster.bootstrap.servers": "us-west-host1:9091",
  "topics": ".*"
}
but I am getting a Method Not Allowed error response:
{
  "error_code": 405,
  "message": "HTTP 405 Method Not Allowed"
}
The API looks OK if I do a simple GET on /, which returns the version:
{
  "version": "2.1.0-cp1",
  "commit": "bda8715f42a1a3db",
  "kafka_cluster_id": "VBo-j1OAQZSN8tO4lMJ0Gg"
}
The PUT method doesn't work; using POST works, as the API's documentation shows:
https://docs.confluent.io/current/connect/references/restapi.html#get--connectors
Remove the name of the connector from the URL as @cricket_007 suggested, and wrap the config in a new element like this:
curl --noproxy "*" -XPOST -H 'Content-Type: application/json' -H 'Accept: application/json' http://localhost:8083/connectors -d '{
  "name": "dc-west-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "dc-west",
    "target.cluster.alias": "dc-east",
    "source.cluster.bootstrap.servers": "dc-west-cp-kafka-0.domain:32721,dc-west-cp-kafka-1.domain:32722,dc-west-cp-kafka-2.dc.domain:32723",
    "topics": ".*"
  }
}' | jq .
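If you'd rather drive this from code, here is the same POST as a minimal Python sketch (assuming the Connect REST endpoint at localhost:8083 as above, and a single bootstrap server for brevity):

import requests

connector = {
    "name": "dc-west-source",
    "config": {
        "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
        "source.cluster.alias": "dc-west",
        "target.cluster.alias": "dc-east",
        "source.cluster.bootstrap.servers": "dc-west-cp-kafka-0.domain:32721",
        "topics": ".*",
    },
}

# POST the name-plus-config envelope to /connectors (no connector name in the URL)
resp = requests.post("http://localhost:8083/connectors", json=connector)
print(resp.status_code, resp.json())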

Get task ids from the Kafka Connect API to print in logs

I have a Kafka Connect sink for which the JSON below is passed via a curl command to register tasks.
Please let me know if anyone has any idea how to get the task ids of my connector. In the example below, we have defined max tasks as 3, so I need to know
the names of the 3 tasks for logging, i.e. I need to know which line of my log belongs to which task.
Based on the Kafka Connect logs, I know I have 3 tasks: TestCheck-1, TestCheck-2, and TestCheck-3. I want to know how to get the task names so that I can print them in my Kafka Connect log lines.
{
  "name": "TestCheck",
  "config": {
    "topics": "topic1",
    "connector.class": "ApplicationSinkTask Class package",
    "tasks.max": "3",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "connector.url": "jdbc connection url",
    "driver.name": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "username": "myusername",
    "password": "mypassword",
    "table.name": "test_table",
    "database.name": "test"
  }
}
When I register it, I get the details below.
curl -X POST -H "Content-Type: application/json" --data @myjson.json http://service:8082/connectors
{"name":"TestCheck","config":{"topics":"topic1","connector.class":"ApplicationSinkTask Class package","tasks.max":"3","key.converter":"org.apache.kafka.connect.storage.StringConverter","value.converter":"org.apache.kafka.connect.storage.StringConverter","connector.url":"jdbc:sqlserver://datahubprod.database.windows.net:1433;","driver.name":"jdbc connection url","username":"myuser","password":"mypassword","table.name":"test_table","database.name":"test","name":"TestCheck"},"tasks":[{"connector":"TestCheck","task":0},{"connector":"TestCheck","task":1},{"connector":"TestCheck","task":2}],"type":null}
You can manage the connectors with the Kafka Connect REST API. There's a whole heap of commands, which you can find here.
The example given in the link above shows you can retrieve all tasks for a given connector using the command:
$ curl localhost:8083/connectors/local-file-sink/tasks
[
  {
    "id": {
      "connector": "local-file-sink",
      "task": 0
    },
    "config": {
      "task.class": "org.apache.kafka.connect.file.FileStreamSinkTask",
      "topics": "connect-test",
      "file": "test.sink.txt"
    }
  }
]
You can use a language of your choice to send the request and read the JSON response into a variable/dictionary for further use, such as printing to a log. Here's a very simple example using Python which assigns the whole output to a variable.
import requests

connectors = 'http://localhost:8083/connectors'
p = requests.get(connectors)
data = p.json()  # a list of registered connector names
If you parse the data variable, you can then access each element, i.e. the task id, as in the sketch below.
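For example, a minimal sketch (assuming the same localhost:8083 endpoint) that walks every connector and prints each task id:

import requests

base = 'http://localhost:8083/connectors'
for name in requests.get(base).json():  # e.g. ["TestCheck"]
    for task in requests.get(f'{base}/{name}/tasks').json():
        # each id holds the connector name plus a zero-based task index
        print(f"{task['id']['connector']}-{task['id']['task']}")  # e.g. TestCheck-0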
I hope this helps!

WSO2 API Manager 2.1 publisher change-lifecycle issue

I deployed API Manager 2.1.0 and set up the api-import-export-2.1.0 WAR file described here. After importing my API endpoint by uploading a zip file, the status is CREATED.
To actually publish the API, I am calling the Publisher's change-lifecycle API, but I am getting this exception:
TID: [-1234] [] [2017-07-06 11:11:57,289] ERROR {org.wso2.carbon.apimgt.rest.api.util.exception.GlobalThrowableMapper} - An Unknown exception has been captured by global exception mapper. {org.wso2.carbon.apimgt.rest.api.util.exception.GlobalThrowableMapper}
java.lang.NoSuchMethodError: org.wso2.carbon.apimgt.api.APIProvider.changeLifeCycleStatus(Lorg/wso2/carbon/apimgt/api/model/APIIdentifier;Ljava/lang/String;)Z
Any ideas on why?
I can get an access token (scope apim:api_view) and call
:9443/api/am/publisher/v0.10/apis
to list the APIs just fine.
I get a different access_token (for scope apim:api_publish) and then call
:9443/api/am/publisher/v0.10/apis/change-lifecycle
but get the above exception. Here's the example:
[root@localhost] ./publish.sh
View APIs (token dc0c1497-6c27-3a10-87d7-b2abc7190da5 scope: apim:api_view)
curl -k -s -H "Authorization: Bearer dc0c1497-6c27-3a10-87d7-b2abc7190da5" https://gw-node:9443/api/am/publisher/v0.10/apis
{
  "count": 1,
  "next": "",
  "previous": "",
  "list": [
    {
      "id": "d214f784-ee16-4067-9588-0898a948bb17",
      "name": "Health",
      "description": "health check",
      "context": "/api",
      "version": "v1",
      "provider": "admin",
      "status": "CREATED"
    }
  ]
}
Publish API (token b9a31369-8ea3-3bf2-ba3c-7f2a4883de7d scope: apim:api_publish)
curl -k -H "Authorization: Bearer b9a31369-8ea3-3bf2-ba3c-7f2a4883de7d" -X POST "https://gw-node:9443/api/am/publisher/v0.10/apis/change-lifecycle?apiId=d214f784-ee16-4067-9588-0898a948bb17&action=Publish"
{
  "code": 500,
  "message": "Internal server error",
  "description": "The server encountered an internal error. Please contact administrator.",
  "moreInfo": "",
  "error": []
}
Issue resolved. In APIM 2.1 the publisher & store API versions changed.
In APIM 2.0 I was using:
:9443/api/am/publisher/v0.10/apis
:9443/api/am/store/v0.10/apis
but in APIM 2.1 they are:
:9443/api/am/publisher/v0.11/apis
:9443/api/am/store/v0.11/apis
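So the failing publish call above only needs the version segment bumped. As a minimal Python sketch (token and apiId placeholders as in the transcript; verify=False standing in for curl -k against the self-signed cert):

import requests

# same request as before, with the version segment bumped to v0.11
resp = requests.post(
    "https://gw-node:9443/api/am/publisher/v0.11/apis/change-lifecycle",
    params={"apiId": "d214f784-ee16-4067-9588-0898a948bb17", "action": "Publish"},
    headers={"Authorization": "Bearer <apim:api_publish token>"},
    verify=False,  # equivalent of curl -k
)
print(resp.status_code, resp.text)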

Lua socket HTTP getting connection refused

I'm trying to create a function that creates an issue in a GitHub repository using Lua's socket.http, but I'm getting connection refused every time. The documentation for socket is a bit unclear, and I couldn't find more helpful information on why the request is getting refused. I tried the following:
local config = {
    token = "oauth token from github"
}
local http, ltn12 = require("socket.http"), require("ltn12")
local payload = '{"title": "Test", "body": "Test body", "labels": ["bug"]}'
local response, status, headers, line = http.request("https://api.github.com/repos/<username>/<repository>/issues?access_token=" .. config.token, payload)
So I checked again how to do it, and there is a second form of request. I'm trying the following:
local response = {}
local _, status, headers, line = http.request{
    url = "https://api.github.com/repos/<username>/<repository>/issues",
    sink = ltn12.sink.table(response),
    method = "POST",
    headers = {
        ["Authorization"] = "token " .. config.token,
        ["Content-Length"] = payload:len()
    },
    source = ltn12.source.string(payload)
}
According to the socket documentation, this should make a POST request to the URL, sending the payload as the body. If I print(status), it prints connection refused.
I'm ignoring the first return value, as it is always 1.
I tried manually issuing the request using curl:
curl -H "Authorization: token <oauth token from github>" https://api.github.com/repos/<username>/<repository>/issues -XPOST -d '{"title": "Test", "body": "{"title": "Test", "body": "Test body", "labels": ["bug"]}'
And it posted the issue properly. I still can't figure it out what is happening that the connection is getting refused.