linkerd cli returns "invalid argument" when running "top" - kubernetes

I'm going through the getting-started tutorial for Linkerd and I've got stable-2.1.0 installed on Kubernetes v1.9.6 and v1.12.3.
I've validated that all the pods are running and the mesh is working via the dashboard.
When I try to run linkerd -n linkerd top deploy/linkerd-web in step 4, I get invalid argument back from the controller.
Here's the verbose output:
DEBU[0000] Expecting API to be served over [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/]
DEBU[0000] Making gRPC-over-HTTP call to [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck] []
DEBU[0000] Response from [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/SelfCheck] had headers: map[Content-Type:[application/octet-stream] Date:[Wed, 12 Dec 2018 05:54:06 GMT] Content-Length:[108]]
DEBU[0000] gRPC-over-HTTP call returned status [200 OK] and content length [108]
DEBU[0003] Response from [https://xx.xx.xx.xx:6443/api/v1/namespaces/linkerd/services/linkerd-controller-api:http/proxy/api/v1/TapByResource] had headers: map[Content-Type:[application/octet-stream] Date:[Wed, 12 Dec 2018 05:54:09 GMT]]
Error: invalid argument
Any advice on what I should try next?
I also created this issue on GitHub

Turns out there are some dependencies (termbox) for drawing the top table that aren't supported on Windows Subsystem for Linux. Here's the issue on GitHub: https://github.com/linkerd/linkerd2/issues/1976

Related

ComponentStatus is deprecated - what to use then

What do you use instead of kubectl get componentstatus?
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
Yes, this API is deprecated. It provided the status of the etcd, kube-scheduler, and kube-controller-manager components, which you can now get through kubectl or through the API server's health endpoints.
So you can try:
kubectl get --raw='/readyz?verbose'
# or, on a local cluster, hit the API server directly:
curl -k https://localhost:6443/livez?verbose
Output:
[+]ping ok
[+]log ok
[+]etcd ok
[+]informer-sync ok
[+]poststarthook/start-kube-apiserver-admission-initializer ok
[+]poststarthook/generic-apiserver-start-informers ok
[+]poststarthook/priority-and-fairness-config-consumer ok
[+]poststarthook/priority-and-fairness-filter ok
[+]poststarthook/start-apiextensions-informers ok
[+]poststarthook/start-apiextensions-controllers ok
[+]poststarthook/crd-informer-synced ok
[+]poststarthook/bootstrap-controller ok
[+]poststarthook/rbac/bootstrap-roles ok
[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
[+]poststarthook/priority-and-fairness-config-producer ok
[+]poststarthook/start-cluster-authentication-info-controller ok
[+]poststarthook/aggregator-reload-proxy-client-cert ok
[+]poststarthook/start-kube-aggregator-informers ok
[+]poststarthook/apiservice-registration-controller ok
[+]poststarthook/apiservice-status-available-controller ok
[+]poststarthook/kube-apiserver-autoregistration ok
[+]autoregister-completion ok
[+]poststarthook/apiservice-openapi-controller ok
[+]shutdown ok
readyz check passed
The current state of this API is problematic: it requires reversing the actual data flow (the API server has to call out to its clients) and is not functional across deployment topologies.
It should be clearly marked as deprecated.
(See the Kubernetes issue "Mark componentstatus as deprecated".)
The Kubernetes API server provides 3 API endpoints (healthz, livez and readyz) to indicate the current status of the API server. The healthz endpoint is deprecated (since Kubernetes v1.16), and you should use the more specific livez and readyz endpoints instead.
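Both endpoints can also be queried through kubectl, without direct access to port 6443; for example (a sketch, assuming your kubeconfig already points at the cluster):
kubectl get --raw='/livez?verbose'
kubectl get --raw='/readyz?verbose'
# individual checks can be queried on their own, e.g. just etcd
kubectl get --raw='/livez/etcd'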
See the Kubernetes documentation on API health endpoints: https://kubernetes.io/docs/reference/using-api/health-checks/

how to get list of pod names using kubernetes rest api (jsonpath)

Is JSONPath supported in the Kubernetes HTTP API?
For example, how does the following translate to an HTTP API call?
kubectl get pods -o=jsonpath='{.items[0]}'
It's not supported by the API; you would need to evaluate that JSONPath expression against the API response yourself.
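For example, one way to reproduce that JSONPath client-side is to call the REST endpoint and filter the JSON yourself with curl and jq (a sketch; it assumes kubectl proxy is running locally and uses the web namespace from the log below):
# expose the API locally so we don't have to deal with auth headers
kubectl proxy --port=8001 &
# equivalent of '{.items[0]}': take the first pod object from the list response
curl -s http://localhost:8001/api/v1/namespaces/web/pods | jq '.items[0]'
# or just the pod names
curl -s http://localhost:8001/api/v1/namespaces/web/pods | jq -r '.items[].metadata.name'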
You can use the verbose flag (-v6 and above) to see what API calls are actually being made:
kubectl get pods -o=jsonpath='{.items[0]}' -v6 2>&1
Output:
I0805 11:16:51.632841 76333 loader.go:375] Config loaded from file: /Users/loganath/firetap/config-ctl1
I0805 11:16:53.666539 76333 round_trippers.go:444] GET https://10.x.x.x:6443/api/v1/namespaces/web/pods?limit=500 200 OK in 2021 milliseconds
I0805 11:16:54.901557 76333 table_printer.go:45] Unable to decode server response into a Table. Falling back to hardcoded types: attempt to decode non-Table object

API Gateway Mock Integration Fails with 500

I have an API Gateway integration for a method/resource that works when I test-invoke it through the API, but not when I actually call the deployed endpoint:
$ aws apigateway test-invoke-method --rest-api-id $REST_API_ID \
--resource-id $RESOURCE_ID --http-method GET | jq -r .log,.body
This works out fine and I get the following output:
Tue May 16 17:46:42 UTC 2017 : Starting execution for request: test-invoke-request
Tue May 16 17:46:42 UTC 2017 : HTTP Method: GET, Resource Path: /status.json
Tue May 16 17:46:42 UTC 2017 : Method request path: {}
Tue May 16 17:46:42 UTC 2017 : Method request query string: {}
Tue May 16 17:46:42 UTC 2017 : Method request headers: {}
Tue May 16 17:46:42 UTC 2017 : Method request body before transformations:
Tue May 16 17:46:42 UTC 2017 : Endpoint response body before transformations:
Tue May 16 17:46:42 UTC 2017 : Endpoint response headers: {}
Tue May 16 17:46:42 UTC 2017 : Method response body after transformations: { "statusCode": 200 }
Tue May 16 17:46:42 UTC 2017 : Method response headers: {Content-Type=application/json}
Tue May 16 17:46:42 UTC 2017 : Successfully completed execution
Tue May 16 17:46:42 UTC 2017 : Method completed with status: 200
{ "statusCode": 200 }
However, I cannot access this at my URL, which is api.naftuli.wtf/v1/status.json. I have stages defined for glhf, stable, and v1, so substituting those into the path yields different responses. I simply want a dummy response that returns a 200 JSON blob.
My Terraform for the resources is here as a Gist. Hopefully this fully shows my API Gateway configuration.
If I test invoke this from the CLI or from the web console, I get back what is expected. However, if I curl this from my deployed API at api.naftuli.wtf, I don't get anything nice:
$ for stage in glhf stable v1 ; do
> url="https://api.naftuli.wtf/${stage}/status.json"
> echo "${url}:"
> curl -i -H 'Content-Type: application/json' \
> https://api.naftuli.wtf/${stage}/status.json
> echo -e '\n'
> done
https://api.naftuli.wtf/glhf/status.json:
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 36
Connection: keep-alive
Date: Tue, 16 May 2017 21:41:38 GMT
x-amzn-RequestId: 712ba52b-3a80-11e7-9fec-b79b62d3bf7f
X-Cache: Error from cloudfront
Via: 1.1 da7a5d0ed7f424609000879e43743066.cloudfront.net (CloudFront)
X-Amz-Cf-Id: hBwlbPCP9n2rlz53I-Qb9KoffHB_FoxUCZUaJYNnU3XhCWuMpQTP1Q==
{"message": "Internal server error"}
https://api.naftuli.wtf/stable/status.json:
HTTP/1.1 403 Forbidden
Content-Type: application/json
Content-Length: 23
Connection: keep-alive
Date: Tue, 16 May 2017 21:41:38 GMT
x-amzn-RequestId: 71561066-3a80-11e7-9b00-6700be628328
x-amzn-ErrorType: ForbiddenException
X-Cache: Error from cloudfront
Via: 1.1 0c146399837c7d36c1f0f9d2636f8cf8.cloudfront.net (CloudFront)
X-Amz-Cf-Id: ITX765xD8s4sNuOdXaJ2kPvqPo-w_dsQK3Sq_No130FAHxFuoVhO8w==
{"message":"Forbidden"}
https://api.naftuli.wtf/v1/status.json:
HTTP/1.1 500 Internal Server Error
Content-Type: application/json
Content-Length: 36
Connection: keep-alive
Date: Tue, 16 May 2017 21:41:39 GMT
x-amzn-RequestId: 7185fa99-3a80-11e7-a3b1-2f9e659fc361
X-Cache: Error from cloudfront
Via: 1.1 586f1a150b4ba39f3a668b8055d4d5ea.cloudfront.net (CloudFront)
X-Amz-Cf-Id: dvnOa1s-YlwLSNzBfVyx5tSL6XrjFJM4_fES7MyTofykB3ReU5R1fg==
{"message": "Internal server error"}
My understanding of stages was that they were additional path prefixes to the base path under which all API resources were available. If I had a stage called v1 with a path of /v1, I'd expect an API Gateway resource for status.json to be mapped under /v1, yielding /v1/status.json.
I may be misunderstanding how API Gateway base path mappings and stages work, but CloudWatch tells me that the call is at least happening, though failing for some obscure reason:
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Verifying Usage Plan for request: c5be3842-6af4-4725-a34f-d6eea8042d17. API Key: API Stage: tcips69qx2/prod_v1
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) API Key authorized because method 'GET /status.json' does not require API Key. Request will not contribute to throttle or quota limits
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Usage Plan check succeeded for API Key and API Stage tcips69qx2/prod_v1
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Starting execution for request: c5be3842-6af4-4725-a34f-d6eea8042d17
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) HTTP Method: GET, Resource Path: /v1/status.json
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Execution failed due to configuration error: statusCode should be an integer which defined in request template
21:41:39(c5be3842-6af4-4725-a34f-d6eea8042d17) Method completed with status: 500
Apparently only traffic to the v1 stage is making it into the CloudWatch logs. I have a misconfiguration somewhere and I can't seem to find it.
Can you try changing your request template in the integration request setup to this:
{
"statusCode": 200
}
API Gateway looks for the status code to return in your integration request template; the response itself is generated by the mapping template in the integration response. I can see from your Terraform setup that you are loading the output JSON file into the integration request template, which is content API Gateway does not expect.
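If you'd rather set this from the AWS CLI than through Terraform or the console, a minimal sketch looks like this (REST_API_ID and RESOURCE_ID are placeholders, and the API still needs to be redeployed afterwards):
aws apigateway put-integration \
  --rest-api-id "$REST_API_ID" \
  --resource-id "$RESOURCE_ID" \
  --http-method GET \
  --type MOCK \
  --request-templates '{"application/json": "{\"statusCode\": 200}"}'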
With an Amazon API Gateway mock integration, there are two common reasons for a 500 Internal Server Error.
Check the mapping template in the Integration Request and ensure that you are passing statusCode as an integer to the MOCK integration endpoint.
{
"statusCode": <Integer_Status_code>
}
Note: make sure the status code is passed as an integer, not a string.
Correct: 200. Incorrect: "200".
Mock integrations do not support binary content. If the API has binary support enabled and has application/json or */* set as binaryMediaTypes, MOCK integration endpoints will throw a 500 Internal Server Error when trying to transform the content.
A workaround is to update the contentHandling property of the MOCK integration to CONVERT_TO_TEXT.
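If you prefer the CLI, that workaround should look something like this (a sketch; REST_API_ID and RESOURCE_ID are placeholders for an existing MOCK integration):
aws apigateway update-integration \
  --rest-api-id "$REST_API_ID" \
  --resource-id "$RESOURCE_ID" \
  --http-method GET \
  --patch-operations op=replace,path=/contentHandling,value=CONVERT_TO_TEXT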
Read more here: https://cloudnamaste.com/500-internal-server-error-mock-integration/
In my case, when I deployed the Lambda with the Serverless Framework, the OPTIONS method returned 200 when called. However, when I configured it manually in AWS API Gateway, it returned a 500 Internal Server Error.
When I checked the API Gateway execution log, it said:
(ee0a42d9-2cfc-4788-8679-00fbd7938cf1) Method request body before transformations: [Binary Data]
(ee0a42d9-2cfc-4788-8679-00fbd7938cf1) Execution failed due to configuration error: Unable to transform request
(ee0a42d9-2cfc-4788-8679-00fbd7938cf1) Gateway response body:
{
"message": "Internal server error"
}
After you create the resources and OPTIONS methods, select the OPTIONS method, then (a rough CLI equivalent of these steps is sketched after them):
At Method Execution --> Integration Request --> expand Mapping Templates and choose "When no template matches the request Content-Type header". Add or select "application/json" as the Content-Type. Click "application/json", and in the Generate template box paste {statusCode: 200} without any double quotes. Note: if {"statusCode": 200} exists with double quotes, change it to match the above. Then save it.
At Method Execution --> Integration Response --> expand the 200 response status --> Mapping Templates --> add or select "application/json" as the Content-Type. Make sure the template box is empty. Then save it.
At Method Execution --> Method Response --> expand the 200 response. In Response Headers for 200, add three headers: Access-Control-Allow-Headers, Access-Control-Allow-Methods, and Access-Control-Allow-Origin. Leave the Response Body for 200 empty.
Action --> Enable CORS for the resource.
Action --> Deploy API.
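As mentioned above, here is a rough CLI equivalent of the integration and response pieces of these steps (a sketch; REST_API_ID and RESOURCE_ID are placeholders, and the two remaining CORS headers follow the same pattern as Access-Control-Allow-Origin):
# mock integration whose request template supplies an integer statusCode
aws apigateway put-integration \
  --rest-api-id "$REST_API_ID" --resource-id "$RESOURCE_ID" \
  --http-method OPTIONS --type MOCK \
  --request-templates '{"application/json": "{\"statusCode\": 200}"}'
# declare the 200 method response and the CORS header it may return
aws apigateway put-method-response \
  --rest-api-id "$REST_API_ID" --resource-id "$RESOURCE_ID" \
  --http-method OPTIONS --status-code 200 \
  --response-parameters '{"method.response.header.Access-Control-Allow-Origin": true}'
# empty 200 integration response that sets the header to a static value
aws apigateway put-integration-response \
  --rest-api-id "$REST_API_ID" --resource-id "$RESOURCE_ID" \
  --http-method OPTIONS --status-code 200 \
  --response-parameters "{\"method.response.header.Access-Control-Allow-Origin\": \"'*'\"}"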
You have at least two distinct problems with your configuration.
First, one of your three base path mappings doesn't match the way you're trying to invoke your API. Note that the base paths don't have to be the same as your stage names, but they can be if you like. Since your base path mappings include base paths and stage names, API Gateway expects the invoke path to include a base path mapping, not a stage, so it interprets the [glhf stable v1] portion of your path as a base path and looks for the corresponding base path mapping entry to determine the API and stage to use.
This works fine for the v1 and glhf base paths, which return 500 (indicating a different problem). The stable base path (in https://api.naftuli.wtf/stable/status.json) returns a 403 Forbidden because there is no base path of "stable" defined for the domain name api.naftuli.wtf. The stable stage is mapped to the "latest" base path, so calling https://api.naftuli.wtf/latest/status.json should be the way to call the stable stage. This doesn't currently work, and I don't know why. If you tell me what region you're running this in, I can look up the config and do more digging.
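For reference, base path mappings are managed per custom domain; adding one from the CLI looks roughly like this (a sketch with a placeholder API ID, e.g. if you also wanted a "stable" base path pointing at the stable stage):
aws apigateway create-base-path-mapping \
  --domain-name api.naftuli.wtf \
  --base-path stable \
  --rest-api-id "$REST_API_ID" \
  --stage stable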
The second problem is indicated by the following entry from your CloudWatch logs:
Execution failed due to configuration error: statusCode should be an integer which defined in request template
Can you check that your integration request template (in the file you reference in "${file("${path.module}/files/status.json")}") contains "statusCode: 200" as a top-level attribute?
I also found it surprising that you're using the same file for a request template and a response template.
Having had identical errors, I found that what helped me solve this issue was to delete my OPTIONS request definition in the AWS Console. I then followed the console's "Enable CORS" form, which created a new OPTIONS method.
I subsequently ran terraform plan and looked at the diff between my OPTIONS definition and theirs. Given that the console-created OPTIONS method worked, I applied the changes.
Using Terraform 0.12 or greater makes this possible, as the terraform plan output is more fine-grained.
I was doing this in CloudFormation.
It took me a while to get it working; the accepted answer here was extremely helpful but a little vague, so I'm adding some more info.
Stefano Buliani's answer, in CloudFormation YAML, looks like:
RequestTemplates:
  application/json: |
    { statusCode: 200 }
What was especially weird here: apparently, the fix was simply to create a deployment using the AWS CLI for each of the stages. Terraform was not updating or re-triggering deployments on changes, so my changes never got out.
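The per-stage deployments can be created from the CLI; a sketch (the REST API ID is a placeholder, and the stage names are the ones from the question):
for stage in glhf stable v1; do
  aws apigateway create-deployment \
    --rest-api-id "$REST_API_ID" \
    --stage-name "$stage"
done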
I had a similar problem and eventually figured out that my client was using a different content type than I expected. I had foolishly assumed it would use application/json, but it was some custom json thing. In my setup, API Gateway is logging to cloudwatch, which is where I found the content type it received from the client. Once I updated the content type in the request template of the mock integration, things worked as expected.

Invoke API error - AWS API gateway

I'm trying to create an API using AWS API Gateway.
First I created a resource, /sample,
then created a GET method,
provided the endpoint URL, and saved it.
In the Method Execution pane, I selected Method Request and added an HTTP request header named "Authorization", to pass basic authentication details to the back-end URL, since the service is secured with basic authentication.
In the Method Execution pane, I chose Integration Request and mapped the HTTP header, with Mapped from set to "method.request.path.Authorization".
I chose Method Execution, and in the Client box chose TEST, passing the header Authorization: Basic XXXXXX.
After finishing all the configuration, I tested the API and got "message": "Internal server error" with status code 500.
For your reference, my back-end service is running on an Amazon Linux machine.
Checked logs:
Execution log for request test-request
Tue Sep 08 16:43:54 UTC 2015 : Starting execution for request: test-invoke-request
Tue Sep 08 16:43:54 UTC 2015 : API Key: test-invoke-api-key
Tue Sep 08 16:43:54 UTC 2015 : Method request path: {}
Tue Sep 08 16:43:54 UTC 2015 : Method request query string: {}
Tue Sep 08 16:43:54 UTC 2015 : Method request headers: {Authorization=************p1c2Vy}
Tue Sep 08 16:43:54 UTC 2015 : Method request body before transformations: null
Tue Sep 08 16:43:54 UTC 2015 : Execution failed due to configuration error: Invalid endpoint address
Could you please let me know how to resolve this issue?
Try method.request.header.Authorization instead.
Varun is right: your mapping expression is wrong.
The expression format for parameters in the request is "method.request.[source].[name]", where source is path/querystring/header and name is the name of the parameter as defined in the method request.
For the integration response, the format is the same except that you'd replace request with response; also note that only headers are available to map in the response.
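For example, wiring that header mapping up from the CLI would look something like this (a sketch; the IDs, integration type, and backend URI are placeholders for your setup):
aws apigateway put-integration \
  --rest-api-id "$REST_API_ID" \
  --resource-id "$RESOURCE_ID" \
  --http-method GET \
  --type HTTP \
  --integration-http-method GET \
  --uri "https://your-backend.example.com/sample" \
  --request-parameters '{"integration.request.header.Authorization": "method.request.header.Authorization"}'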
If you just want a quick fix to get your API working, follow these steps:
Steps
Log in to the AWS console.
Go to the "API Gateway" dashboard, select the API you need to invoke, then select the method underneath (GET/POST/...).
In the method execution workflow, click the "Method Response" panel and add status code 200; you can also add some headers for it.
Under "Response Body", add "application/json" with the "Empty" model.
You should also click the "Integration Request" panel and uncheck "Use Lambda Proxy integration".
Last step
Deploy your API to a stage (dev/test/prod).

Ambari - Unsupported or invalid service in stack

I am trying to install a custom service using these instructions and these commands to add the service. When I issue the curl command, instead of the service being added, I get this error:
HTTP/1.1 400 Bad Request
Set-Cookie: AMBARISESSIONID=ID;Path=/
Expires: Thu, 01 Jan 1970 00:00:00 GMT
Content-Type: text/plain
Content-Length: 139
Server: Jetty(7.6.7.v20120910)
{
"status" : 400,
"message" : "Unsupported or invalid service in stack, clusterName=MahiMahi, serviceName=TESTSRV, stackInfo=HDP-2.1"
}
What is going on here? My cluster is installed perfectly and I can see the dashboard and metrics and stuff. Just can’t seem to add a custom service. Please help out. Thanks!
I figured out what was wrong. When using Ambari 1.6.1, it automatically downloads and uses HDP 2.1. The folder structure in the link says: cd /var/lib/ambari-server/resources/stacks/HDP/2.0.6/services. I changed it to: cd /var/lib/ambari-server/resources/stacks/HDP/2.1/services. Problem solved. So stupid. So simple. So much frustration. Such hopeless documentation.
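In other words, with Ambari 1.6.1 (which pulls in HDP 2.1), the custom service definition has to live under the 2.1 stack directory; a sketch of the adjusted steps (the TESTSRV name comes from the error above, and the exact files depend on your service definition):
cd /var/lib/ambari-server/resources/stacks/HDP/2.1/services
sudo mkdir TESTSRV
# copy your service definition (metainfo.xml, package/ scripts, ...) into TESTSRV
sudo ambari-server restart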