WSO2 API Manager 2.1 publisher change-lifecycle issue - REST

I deployed API Manager 2.1.0 and set up the api-import-export-2.1.0 WAR file as described here. After importing my API endpoint by uploading a zip file, the status is CREATED.
To actually publish the API I am calling the Publisher's change-lifecycle API, but I am getting this exception:
TID: [-1234] [] [2017-07-06 11:11:57,289] ERROR
{org.wso2.carbon.apimgt.rest.api.util.exception.GlobalThrowableMapper}
- An Unknown exception has been captured by global exception mapper.
{org.wso2.carbon.apimgt.rest.api.util.exception.GlobalThrowableMapper}
java.lang.NoSuchMethodError:
org.wso2.carbon.apimgt.api.APIProvider.changeLifeCycleStatus(Lorg/wso2/carbon/apimgt/api/model/APIIdentifier;Ljava/lang/String;)Z
Any ideas why?
I can get an access token (scope apim:api_view) and call
:9443/api/am/publisher/v0.10/apis
to list the APIs just fine.
I then get a different access_token (for scope apim:api_publish) and call
:9443/api/am/publisher/v0.10/apis/change-lifecycle
but get the above exception. Here's the example:
[root@localhost] ./publish.sh
View APIs (token dc0c1497-6c27-3a10-87d7-b2abc7190da5 scope: apim:api_view)
curl -k -s -H "Authorization: Bearer dc0c1497-6c27-3a10-87d7-b2abc7190da5" https://gw-node:9443/api/am/publisher/v0.10/apis
{
  "count": 1,
  "next": "",
  "previous": "",
  "list": [
    {
      "id": "d214f784-ee16-4067-9588-0898a948bb17",
      "name": "Health",
      "description": "health check",
      "context": "/api",
      "version": "v1",
      "provider": "admin",
      "status": "CREATED"
    }
  ]
}
Publish API (token b9a31369-8ea3-3bf2-ba3c-7f2a4883de7d scope: apim:api_publish)
curl -k -H "Authorization: Bearer b9a31369-8ea3-3bf2-ba3c-7f2a4883de7d" -X POST "https://gw-node:9443/api/am/publisher/v0.10/apis/change-lifecycle?apiId=d214f784-ee16-4067-9588-0898a948bb17&action=Publish"
{
  "code": 500,
  "message": "Internal server error",
  "description": "The server encountered an internal error. Please contact administrator.",
  "moreInfo": "",
  "error": []
}

Issue resolved. In APIM 2.1 the Publisher and Store REST API versions changed.
In APIM 2.0 I was using:
:9443/api/am/publisher/v0.10/apis
:9443/api/am/store/v0.10/apis
but in APIM 2.1 they are:
:9443/api/am/publisher/v0.11/apis
:9443/api/am/store/v0.11/apis
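For reference, a minimal Python sketch of the working publish call against the v0.11 endpoint (the token and API id are placeholders, and verify=False assumes the default self-signed certificate):

import requests

# Placeholders - substitute a token with the apim:api_publish scope and your API id
token = "<access_token>"
api_id = "d214f784-ee16-4067-9588-0898a948bb17"

# Note the v0.11 path segment for APIM 2.1
url = "https://gw-node:9443/api/am/publisher/v0.11/apis/change-lifecycle"

# verify=False because the default install uses a self-signed certificate
resp = requests.post(url,
                     params={"apiId": api_id, "action": "Publish"},
                     headers={"Authorization": f"Bearer {token}"},
                     verify=False)
print(resp.status_code, resp.text)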

Related

Receiving 404 when trying to edit a datasource despite being able to retrieve its details

I have a datasource created in Grafana and am attempting to update it to refresh the bearer token used for auth access.
However, I'm receiving a 404 Not Found error from the Grafana API when making a request to localhost:3000/api/datasources/uid/:uid with a uid just received from the datasources/name API - attempting to update as per the documentation https://grafana.com/docs/grafana/latest/developers/http_api/data_source/#update-an-existing-data-source
I'm using the Grafana open-source Docker container with the Infinity plugin.
docker run -d -p 3000:3000 --name=grafana -e "GF_INSTALL_PLUGINS=yesoreyeram-infinity-datasource" grafana/grafana-oss
I'm able to create a datasource via the API, I just can't update an existing one.
My code is:
from json import loads, dumps
from requests import get, post

grafana_api_token = '<my api token>'
new_access_token = '<my new bearer token>'
my_data_source = 'my_data_source'

grafana_header = {"authorization": f"Bearer {grafana_api_token}", "content-type": "application/json;charset=UTF-8"}

# Look up the datasource by name to get its uid
grafana_datasource_url = f"http://localhost:3000/api/datasources/name/{my_data_source}"
firebolt_datasource_resp = get(url=grafana_datasource_url, headers=grafana_header)
full_datasource = loads(firebolt_datasource_resp.content.decode("utf-8"))
datasource_uid = full_datasource["uid"]

# Update the datasource, swapping in the new bearer token
update_token_url = f"http://localhost:3000/api/datasources/uid/{datasource_uid}"
new_data = {"id": full_datasource["id"],
            "uid": full_datasource["uid"],
            "orgId": full_datasource["orgId"],
            "name": "new_data_source",
            "type": full_datasource["type"],
            "access": full_datasource["access"],
            "url": full_datasource["url"],
            "user": full_datasource["user"],
            "database": full_datasource["database"],
            "basicAuth": full_datasource["basicAuth"],
            "basicAuthUser": full_datasource["basicAuthUser"],
            "withCredentials": full_datasource["withCredentials"],
            "isDefault": full_datasource["isDefault"],
            "jsonData": full_datasource["jsonData"],
            "secureJsonData": {
                "bearerToken": new_access_token
            }
            }
update_bearer_token_resp = post(url=update_token_url, data=dumps(new_data), headers=grafana_header)
Oh, oh, oh, idiot mode. Using POST rather than PUT. Doh.
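For completeness, the corrected call is a one-line change - the same payload sent with requests.put (variable names reused from the snippet above):

from json import dumps
from requests import put

# update_token_url, new_data and grafana_header are defined as in the question's snippet;
# the only change is the HTTP verb: PUT instead of POST
update_bearer_token_resp = put(url=update_token_url, data=dumps(new_data), headers=grafana_header)
print(update_bearer_token_resp.status_code)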

Kafka REST API source connector with authentication header

I need to create a Kafka source connector for a REST API with header authentication, like:
curl -H "Authorization: Basic <key>" -H "clientID: <key>" "https://<url for source>"
I am using Apache Kafka, and I used the connector class com.github.castorm.kafka.connect.http.HttpSourceConnector.
Here is my JSON file for the connector:
{
  "name": "rest_data6",
  "config": {
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter.schemas.enable": "true",
    "connector.class": "com.github.castorm.kafka.connect.http.HttpSourceConnector",
    "tasks.max": "1",
    "http.request.headers": "Authorization: Basic <key1>",
    "http.request.headers": "clientID: <key>",
    "http.request.url": "https://<url for source>",
    "kafka.topic": "mysqltopic2"
  }
}
I also tried with "connector.class": "com.tm.kafka.connect.rest.RestSourceConnector". My JSON file for that is below:
{
  "name": "rest_data2",
  "config": {
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter.schemas.enable": "true",
    "connector.class": "com.tm.kafka.connect.rest.RestSourceConnector",
    "rest.source.poll.interval.ms": "900",
    "rest.source.method": "GET",
    "rest.source.url": "<URL of source>",
    "tasks.max": "1",
    "rest.source.headers": "Authorization: Basic <key>, clientId: <key2>",
    "rest.source.topic.selector": "com.tm.kafka.connect.rest.selector.SimpleTopicSelector",
    "rest.source.destination.topics": "mysql1"
  }
}
But no luck. Any idea how to GET REST API data with authentication? My authentication parameters are Authorization: Basic <key> and clientID: <key>.
Just to mention: both files work with the REST API without authentication. Once I add the authentication parameters, either the connector status is FAILED or it produces a "Cannot route. Codebase/company is invalid" message in the topic.
Can anyone suggest a way to solve this?
I emailed the original developer, Cástor Rodríguez. Per his solution I modified my JSON:
putting both headers into a single http.request.headers property makes it work.
{
  "name": "rest_data6",
  "config": {
    "key.converter": "org.apache.kafka.connect.json.JsonConverter",
    "value.converter": "org.apache.kafka.connect.json.JsonConverter",
    "key.converter.schemas.enable": "true",
    "value.converter.schemas.enable": "true",
    "connector.class": "com.github.castorm.kafka.connect.http.HttpSourceConnector",
    "tasks.max": "1",
    "http.request.headers": "Authorization: Basic <key1>, clientID: <key>",
    "http.request.url": "https://<url for source>",
    "kafka.topic": "mysqltopic2"
  }
}
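If it helps, here is a small Python sketch for registering the fixed config with the Connect REST API (it assumes Connect is listening on localhost:8083; the credentials and source URL are placeholders):

import json
import requests

# Placeholders - substitute real credentials and the real source URL
config = {
    "name": "rest_data6",
    "config": {
        "key.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "key.converter.schemas.enable": "true",
        "value.converter.schemas.enable": "true",
        "connector.class": "com.github.castorm.kafka.connect.http.HttpSourceConnector",
        "tasks.max": "1",
        # both headers in one comma-separated property, per the fix above
        "http.request.headers": "Authorization: Basic <key1>, clientID: <key>",
        "http.request.url": "https://<url for source>",
        "kafka.topic": "mysqltopic2"
    }
}

resp = requests.post("http://localhost:8083/connectors",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(config))
print(resp.status_code, resp.json())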

Setting up MirrorMaker 2 using the Kafka Connect REST API: PUT method not allowed

I am trying to set up MirrorMaker 2 using my current Connect cluster.
Based on this documentation, it can be done via the Connect REST API:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-382%3A+MirrorMaker+2.0#KIP-382:MirrorMaker2.0-RunningMirrorMakerinaConnectcluster
I followed the sample, sending this PUT request:
PUT /connectors/us-west-source/config HTTP/1.1
{
  "name": "us-west-source",
  "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
  "source.cluster.alias": "us-west",
  "target.cluster.alias": "us-east",
  "source.cluster.bootstrap.servers": "us-west-host1:9091",
  "topics": ".*"
}
but I am getting a Method Not Allowed error response:
{
  "error_code": 405,
  "message": "HTTP 405 Method Not Allowed"
}
The API looks OK if I do a simple GET on /, which returns the version:
{
  "version": "2.1.0-cp1",
  "commit": "bda8715f42a1a3db",
  "kafka_cluster_id": "VBo-j1OAQZSN8tO4lMJ0Gg"
}
The PUT method doesn't work; using POST works, as the API's documentation shows:
https://docs.confluent.io/current/connect/references/restapi.html#get--connectors
Remove the name of the connector from the URL as @cricket_007 suggested, and wrap the config in a "config" element like this:
curl --noproxy "*" -XPOST -H 'Content-Type: application/json' -H 'Accept: application/json' http://localhost:8083/connectors -d '{
  "name": "dc-west-source",
  "config": {
    "connector.class": "org.apache.kafka.connect.mirror.MirrorSourceConnector",
    "source.cluster.alias": "dc-west",
    "target.cluster.alias": "dc-east",
    "source.cluster.bootstrap.servers": "dc-west-cp-kafka-0.domain:32721,dc-west-cp-kafka-1.domain:32722,dc-west-cp-kafka-2.dc.domain:32723",
    "topics": ".*"
  }
}' | jq .

Get task IDs from the Kafka Connect API to print in logs

I have Kafka Connect sink code for which the JSON below is passed as a curl command to register the tasks.
Please let me know if anyone has any idea how to get the task IDs of my connector. For example, in the example below tasks.max is 3, so I need to know the names of the 3 tasks for logging, i.e. which line of my log belongs to which task.
From the Kafka Connect logs I know I have 3 tasks - TestCheck-1, TestCheck-2 and TestCheck-3. I want to know how to get those task names so that I can print them in my Kafka Connect log lines.
{
  "name": "TestCheck",
  "config": {
    "topics": "topic1",
    "connector.class": "ApplicationSinkTask Class package",
    "tasks.max": "3",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "connector.url": "jdbc connection url",
    "driver.name": "com.microsoft.sqlserver.jdbc.SQLServerDriver",
    "username": "myusername",
    "password": "mypassword",
    "table.name": "test_table",
    "database.name": "test"
  }
}
When I register it, I get the details below.
curl -X POST -H "Content-Type: application/json" --data @myjson.json http://service:8082/connectors
{"name":"TestCheck","config":{"topics":"topic1","connector.class":"ApplicationSinkTask Class package","tasks.max":"3","key.converter":"org.apache.kafka.connect.storage.StringConverter","value.converter":"org.apache.kafka.connect.storage.StringConverter","connector.url":"jdbc:sqlserver://datahubprod.database.windows.net:1433;","driver.name":"jdbc connection url","username":"myuser","password":"mypassword","table.name":"test_table","database.name":"test","name":"TestCheck"},"tasks":[{"connector":"TestCheck","task":0},{"connector":"TestCheck","task":1},{"connector":"TestCheck","task":2}],"type":null}
You can manage the connectors with the Kafka Connect REST API. There's a whole heap of commands, which you can find here.
The example given in the above link shows you can retrieve all tasks for a given connector using the command:
$ curl localhost:8083/connectors/local-file-sink/tasks
[
  {
    "id": {
      "connector": "local-file-sink",
      "task": 0
    },
    "config": {
      "task.class": "org.apache.kafka.connect.file.FileStreamSinkTask",
      "topics": "connect-test",
      "file": "test.sink.txt"
    }
  }
]
You can use a language of your choice to send the curl request and load the JSON response into a variable/dictionary for further use, such as printing to a log. Here's a very simple example using Python which assigns the whole output to a variable:
import requests

# Query the Connect REST API for the list of connectors
connectors = 'http://localhost:8083/connectors'
p = requests.get(connectors)
data = p.json()
If you parse the data variable into a dictionary, you can then access each element, e.g. each task id.
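Building on that, a short sketch that pulls each task's ID from the tasks endpoint (localhost:8083 and the TestCheck connector name are assumptions taken from the question):

import requests

# Ask Connect for the tasks of a specific connector
tasks = requests.get('http://localhost:8083/connectors/TestCheck/tasks').json()

# Each entry's "id" object holds the connector name and the numeric task id
for t in tasks:
    print(f'{t["id"]["connector"]}-{t["id"]["task"]}')  # e.g. TestCheck-0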
I hope this helps!

How to enable JavaScript in Druid

I have been using Druid for the past week and wanted to enable JavaScript for some postAggregations.
I think I followed the outlined steps and updated the common.runtime.properties file in ../conf/druid/_common/ to include druid.javascript.enabled=true. I then stopped the current processes and re-ran the Quickstart procedures, but it still says that JavaScript is disabled:
{
  "error" : "Unknown exception",
  "errorMessage" : "Instantiation of [simple type, class io.druid.query.aggregation.post.JavaScriptPostAggregator] value failed: JavaScript is disabled. (through reference chain: java.util.ArrayList[0])",
  "errorClass" : "com.fasterxml.jackson.databind.JsonMappingException",
  "host" : null
}
I am currently running it in the 'Quickstart' configuration - single local machine. Any pointers? Thanks!
A JavaScript query for Druid aggregation: save the query below as query.body and hit it with the curl request. This is a sample query for an average value.
curl -X POST "http://localhost:8082/druid/v2/?pretty" \
  -H 'content-type: application/json' -d @query.body
{
  "queryType": "groupBy",
  "dataSource": "whirldata",
  "granularity": "all",
  "dimensions": [],
  "aggregations": [
    {"name": "rows", "type": "count", "fieldName": "rows"},
    {"name": "TargetDOS", "type": "doubleSum", "fieldName": "Target DOS"}
  ],
  "postAggregations": [
    {
      "type": "javascript",
      "name": "Target DOS Average",
      "fieldNames": ["TargetDOS", "rows"],
      "function": "function(TargetDOS, rows) { return Math.abs(TargetDOS) / rows; }"
    }
  ],
  "intervals": ["2006-01-01T00:00:00.000Z/2020-01-01T00:00:00.000Z"]
}
The part you are missing is likely that the quickstart reads configs from conf-quickstart rather than conf. So try editing conf-quickstart/druid/_common/common.runtime.properties.
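For example, under the quickstart layout the property would go here (a sketch of just the relevant line, assuming the default directory structure):

# conf-quickstart/druid/_common/common.runtime.properties
druid.javascript.enabled=true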