I was able to manually create a certificate:
I created a CSR file
I created and applied a CertificateSigningRequest k8s resource
I approved the request using
kubectl certificate approve <name>
I extracted the certificate from the CertificateSigningRequest's status.certificate field.
Now I want to repeat the process programmatically. I'm using the @kubernetes/client-node npm package for this purpose.
I'm able to create and apply the CertificateSigningRequest resource:
const csrResource = await adminCertApi.createCertificateSigningRequest({
  metadata: {
    name: 'my.email@my.company.com',
  },
  spec: {
    request: csrBase64,
    signerName: 'kubernetes.io/kube-apiserver-client',
    usages: [
      'client auth'
    ],
  },
});
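For completeness, csrBase64 is not shown above; it is assumed here to be the PEM-encoded CSR from the first step, base64-encoded as the API expects. A minimal sketch of how it might be built (the file name is illustrative):

// Assumption: client.csr is the PEM CSR created earlier.
const fs = require('fs');
const csrPem = fs.readFileSync('client.csr', 'utf8');
const csrBase64 = Buffer.from(csrPem).toString('base64');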
But then I got stuck trying to approve the request (following the documentation). I tried several variations that look like this:
csrResource.body.status.conditions = [
  {
    message: 'Approved by CWAdmin GraphQL Lambda function',
    reason: 'ApprovedByCWAdmin',
    type: 'Approved',
  }
];

const response = await adminCertApi.patchCertificateSigningRequest(
  'my.email@my.company.com',
  csrResource.body,
  undefined,
  undefined,
  undefined,
  undefined,
  { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } }
);
Unfortunately, this does not update the status.conditions field. Even if it did, what triggers the signing of the certificate? The documentation states that the kube-controller-manager never auto-approves requests of type kubernetes.io/kube-apiserver-client.
In other words, what is the programmatic equivalent of kubectl certificate approve?
I found this bit of documentation that helped me solve the issue:
status is required and must be True, False, or Unknown
Approved/Denied conditions can only be set via the /approval subresource
So I added the status field to the condition and changed the API call to patchCertificateSigningRequestApproval.
The working code now looks like this:
const body = {
  status: {
    conditions: [
      {
        message: 'Approved by CWAdmin GraphQL Lambda function',
        reason: 'ApprovedByCWAdmin',
        type: 'Approved',
        status: 'True',
      }
    ]
  }
};

const response = await adminCertApi.patchCertificateSigningRequestApproval(
  'my.email@my.company.com',
  body,
  undefined,
  undefined,
  undefined,
  undefined,
  { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } }
);
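Once the request is approved, the issued certificate can be read back from status.certificate, mirroring the last manual step. A minimal sketch, assuming the same adminCertApi client and library version as above (the signer may take a moment to populate the field, so this may need retrying):

const csr = await adminCertApi.readCertificateSigningRequest('my.email@my.company.com');
// status.certificate holds the base64-encoded PEM certificate once it has been issued.
const certBase64 = csr.body.status && csr.body.status.certificate;
if (!certBase64) {
  throw new Error('Certificate has not been issued yet');
}
const certPem = Buffer.from(certBase64, 'base64').toString('utf8');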
Related
I recently posted a question regarding creating a document in a collection in the local emulator suite using a HTTP Post Request with Axios.
Creating new document with Firestore REST API and Local Emulator Suite, Returning Error 404: Problem with Path Parameter
My previous error was a 404 related to the URL path parameter. Since making changes to that, I'm now receiving a 400 error. The following is the code:
First, I use a POST request to create an authenticated user's ID token and store it in a variable. No issues here.
//HTTP Post Request to create an auth ID, storing ID Token in variable
const createUserResponse = await axios.post(createUserInstance.url, createUserInstance.data, createUserInstance.config);
const userIdToken = createUserResponse.data.idToken;
const userLocalId = createUserResponse.data.localId;
console.log(userIdToken);
console.log(userLocalId);
Second, I write the request body. I've modified the URL, which works now. I've also reworked the data body to make sure it's in the correct format.
I'm wondering if the issue is in:
the way I've written the query parameters (do I need to specify the new document name? is the API key the problem?)
the formatting of the data request body: according to the documentation a new document is automatically generated. I'd presume fields would be read as containing the key-value pairs to include in the document fields (see the sketch after this list).
the formatting of the headers. I've checked and re-checked this. My userIdToken contains a string.
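For reference, the Firestore REST API's createDocument endpoint expects each entry under fields to be a typed Value object (stringValue, integerValue, etc.) rather than a bare string. A minimal sketch of that shape, assuming a single string field (the value is illustrative):

// Each field value is wrapped in a typed Value object, as the REST API documents.
const data = {
  fields: {
    localId: {
      stringValue: 'hello',
    },
  },
};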
//create a document in the user collection
//request body
const createDocumentInstance: createPostRequest = {
  url: 'http://localhost:8080/v1beta1/projects/okane-crud-dev/databases/(default)/documents/test?key=<API_KEY>',
  data: {
    'fields': {
      'localId': 'hello',
    }
  },
  //directly pasted IdToken as using the variable resulted in problem with ' ' error
  config: {
    'headers': {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${userIdToken}`,
    }
  }
};
To make sure of what I was looking at, I logged the entire request in my console. This is what it looks like.
console.log
{
url: 'http://localhost:8080/v1beta1/projects/okane-crud-dev/databases/(default)/documents/test?key=AIzaSyCQSnirvajGL5Uok34OgEn7tF1S_tp5sa0',
data: { fields: { localId: 'hello' } },
config: {
headers: {
'Content-Type': 'application/json',
Authorization: 'Bearer eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJlbWFpbCI6Im15ZW1haWxAZW1haWwuY29tIiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJhdXRoX3RpbWUiOjE2NjU0NTQ2MDgsInVzZXJfaWQiOiI1Vmt3TUtRc1k0THJRTkRWaXpFYmdnYnExOVNyIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJlbWFpbCI6WyJteWVtYWlsQGVtYWlsLmNvbSJdfSwic2lnbl9pbl9wcm92aWRlciI6InBhc3N3b3JkIn0sImlhdCI6MTY2NTQ1NDYwOCwiZXhwIjoxNjY1NDU4MjA4LCJhdWQiOiJva2FuZS1jcnVkLWRldiIsImlzcyI6Imh0dHBzOi8vc2VjdXJldG9rZW4uZ29vZ2xlLmNvbS9va2FuZS1jcnVkLWRldiIsInN1YiI6IjVWa3dNS1FzWTRMclFORFZpekViZ2dicTE5U3IifQ.'
}
}
}
Finally, I make the POST request with Axios, using the exact same syntax as my previous POST request.
//Post Request to create a document
const createDocument = await axios.post(createDocumentInstance.url, createDocumentInstance.data, createDocumentInstance.config);
const docReference = createDocument.data;
console.log(docReference);
When running this, the following error is returned:
{
message: 'Request failed with status code 400',
name: 'Error',
description: undefined,
number: undefined,
fileName: undefined,
lineNumber: undefined,
columnNumber: undefined,
stack: 'Error: Request failed with status code 400\n' +
' at createError (/Users/georgettekoo/Documents/Code/Okane/Okane-Firebase-Backend-Deprecated/functions/node_modules/axios/lib/core/createError.js:16:15)\n' +
' at settle (/Users/georgettekoo/Documents/Code/Okane/Okane-Firebase-Backend-Deprecated/functions/node_modules/axios/lib/core/settle.js:17:12)\n' +
' at IncomingMessage.handleStreamEnd (/Users/georgettekoo/Documents/Code/Okane/Okane-Firebase-Backend-Deprecated/functions/node_modules/axios/lib/adapters/http.js:293:11)\n' +
' at IncomingMessage.emit (node:events:539:35)\n' +
' at endReadableNT (node:internal/streams/readable:1344:12)\n' +
' at processTicksAndRejections (node:internal/process/task_queues:82:21)',
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
Authorization: 'Bearer eyJhbGciOiJub25lIiwidHlwIjoiSldUIn0.eyJlbWFpbCI6Im15ZW1haWxAZW1haWwuY29tIiwiZW1haWxfdmVyaWZpZWQiOmZhbHNlLCJhdXRoX3RpbWUiOjE2NjU2MzAwMTUsInVzZXJfaWQiOiJEMTBoblpsek9nQWR0ZlJlNm1VUDBOY2ZtNm5pIiwiZmlyZWJhc2UiOnsiaWRlbnRpdGllcyI6eyJlbWFpbCI6WyJteWVtYWlsQGVtYWlsLmNvbSJdfSwic2lnbl9pbl9wcm92aWRlciI6InBhc3N3b3JkIn0sImlhdCI6MTY2NTYzMDAxNSwiZXhwIjoxNjY1NjMzNjE1LCJhdWQiOiJva2FuZS1jcnVkLWRldiIsImlzcyI6Imh0dHBzOi8vc2VjdXJldG9rZW4uZ29vZ2xlLmNvbS9va2FuZS1jcnVkLWRldiIsInN1YiI6IkQxMGhuWmx6T2dBZHRmUmU2bVVQME5jZm02bmkifQ.',
'User-Agent': 'axios/0.24.0',
'Content-Length': 30
},
method: 'post',
url: 'http://localhost:8080/v1beta1/projects/okane-crud-dev/databases/(default)/documents/test?key=AIzaSyCQSnirvajGL5Uok34OgEn7tF1S_tp5sa0',
data: '{"fields":{"localId":"hello"}}'
},
code: undefined,
status: 400
}
Not sure what I'm missing and where the issue is. Does anyone have any hints?
I am receiving this error randomly when I am trying to send a request to Google Analytics API v3:
"User does not have sufficient permissions for this profile."
Out of every 8-10 attempts at the same request (same parameters, authentication, etc.), I receive this error only once; the other times I receive the correct response. The other strange part is that we handle many clients, and I have only seen this error for a handful of them.
For more background, we are using the googleapis npm package to send our Google Analytics API requests.
These are the parameters that I am sending to the API:
{
params: {
auth: OAuth2Client {
_events: [Object: null prototype] {},
_eventsCount: 0,
_maxListeners: undefined,
transporter: DefaultTransporter {},
credentials: [Object],
eagerRefreshThresholdMillis: 300000,
forceRefreshOnFailure: false,
certificateCache: {},
certificateExpiry: null,
certificateCacheFormat: 'PEM',
refreshTokenPromises: Map {},
_clientId: 'XXXXX',
_clientSecret: 'XXXX',
redirectUri: 'postmessage'
},
ids: 'ga:XXXX',
metrics: 'ga:sessions,ga:bounces,ga:transactions,ga:transactionRevenue,ga:goalCompletionsAll',
dimensions: 'ga:date',
'start-date': '2021-10-01',
'end-date': '2021-10-20',
samplingLevel: 'HIGHER_PRECISION',
quotaUser: 'XXX'
}
}
new Promise((resolve, reject) => {
  return google
    .analytics({ version: "v3" })
    .data.ga.get(params, (error, { data: response } = {}) => {
      if (error) {
        return reject(new Error(`Google API sent the following error: ${error}`));
      }
      return resolve(response);
    });
})
Authentication:
const OAuth2 = google.auth.OAuth2;
const oauth2Client = new OAuth2(process.env.GOOGLE_CLIENT_ID, process.env.GOOGLE_CLIENT_SECRET, "postmessage");
oauth2Client.setCredentials(tokens);
await oauth2Client.getRequestHeaders().catch((error) => {
  throw error;
});
And then passing oauth2Client in the params as auth.
I resolved the issue. In my case, I was using the same oauth2Client object for multiple API requests but was calling these lines before each request:
oauth2Client.setCredentials(tokens);
await oauth2Client.getRequestHeaders();
This could change the token that I was passing in the request parameters (params) before the request was actually sent.
So, in other words, if you are sending multiple requests to the API at the same time, it is better to generate the token once and reuse it for all of those requests.
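A minimal sketch of that pattern with googleapis (the function names and request list are illustrative): the credentials are set once on a single OAuth2 client, and every concurrent request reuses that client instead of calling setCredentials() or getRequestHeaders() again per request.

const { google } = require('googleapis');

// Build one OAuth2 client per set of tokens and set the credentials exactly once.
function buildAuthClient(tokens) {
  const oauth2Client = new google.auth.OAuth2(
    process.env.GOOGLE_CLIENT_ID,
    process.env.GOOGLE_CLIENT_SECRET,
    'postmessage'
  );
  oauth2Client.setCredentials(tokens);
  return oauth2Client;
}

// Fire all requests with the same, already-configured client.
async function fetchReports(tokens, requestParamsList) {
  const auth = buildAuthClient(tokens);
  const analytics = google.analytics({ version: 'v3' });
  return Promise.all(
    requestParamsList.map((params) =>
      analytics.data.ga.get({ auth, ...params }).then((res) => res.data)
    )
  );
}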
I'm creating a LambdaRestApi in CDK, and I want both CORS enabled and an ANY proxy added using the addProxy method.
I currently have the following CDK code:
const api = new LambdaRestApi(...); // This API has CORS enabled in defaultCorsPreflightOptions

const apiProxy = api.root.addProxy({
  defaultMethodOptions: {
    authorizationType: AuthorizationType.COGNITO,
    authorizer: new CognitoUserPoolsAuthorizer(...),
  }
});
The problem I'm running into is that while a proxy is created with the ANY method, it also sets the OPTIONS method to require authentication. I tried to add an OPTIONS method to the proxy using addMethod to override the authorizer but I get an error that there's already a construct with the same name. I'm also trying to avoid having to set the anyMethod field in the proxy to be false and adding my own methods. Is there a way in the API Gateway CDK to set the default authorizer to only work for any method except the OPTIONS method?
It is also possible to remove the auth explicitly on the OPTIONS methods; here I also remove the requirement for an X-API-Key header on the requests:
const lambdaApi = new apigateway.LambdaRestApi(...); // This API has CORS enabled in defaultCorsPreflightOptions

lambdaApi.methods
  .filter((method) => method.httpMethod === "OPTIONS")
  .forEach((method) => {
    const methodCfn = method.node.defaultChild as apigateway.CfnMethod;
    methodCfn.authorizationType = apigateway.AuthorizationType.NONE;
    methodCfn.authorizerId = undefined;
    methodCfn.authorizationScopes = undefined;
    methodCfn.apiKeyRequired = false;
  });
I ran into the same issue when building a RestApi using the AWS CDK. Here is a workaround where you build the API piece by piece.
Declare the API construct without the defaultCorsPreflightOptions property; otherwise you will not be able to override authorization on the OPTIONS method.
import * as apigateway from '@aws-cdk/aws-apigateway';
import * as lambda from '@aws-cdk/aws-lambda';

const restAPI = new apigateway.RestApi(this, "sample-api");
Add your resources and methods. In this case, I want to add an ANY method with a custom authorizer on the "data" resource. You can do this with any other supported authorization mechanism. The proxy handler is a Lambda function written in Node.js.
const dataProxyLambdaFunction = new lambda.Function(this, "data-handler", {
  code: lambda.Code.fromBucket(S3CODEBUCKET, "latest/js_data.zip"),
  handler: "index.handler",
  runtime: lambda.Runtime.NODEJS_14_X
});

const dataProxy = restAPI.root.addResource("data")
  .addResource("{proxy+}");

dataProxy.addMethod("ANY", new apigateway.LambdaIntegration(dataProxyLambdaFunction, {
  allowTestInvoke: true,
}), { authorizationType: apigateway.AuthorizationType.CUSTOM, authorizer: customLambdaRequestAuthorizer });
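The customLambdaRequestAuthorizer referenced above is not shown in the original; a minimal sketch of one way it could be constructed, assuming a separate authorizer Lambda (the names and code location are illustrative):

// Hypothetical Lambda backing the custom request authorizer used above.
const authorizerLambdaFunction = new lambda.Function(this, "authorizer-handler", {
  code: lambda.Code.fromBucket(S3CODEBUCKET, "latest/js_authorizer.zip"),
  handler: "index.handler",
  runtime: lambda.Runtime.NODEJS_14_X
});

const customLambdaRequestAuthorizer = new apigateway.RequestAuthorizer(this, "custom-authorizer", {
  handler: authorizerLambdaFunction,
  identitySources: [apigateway.IdentitySource.header("Authorization")],
});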
Now you can add an OPTIONS method to this proxy, without authorization. I used a standardCorsMockIntegration and an optionsMethodResponse object so they can be reused with other methods in my API.
const ALLOWED_HEADERS = ['Content-Type', 'X-Amz-Date', 'X-Amz-Security-Token', 'Authorization', 'X-Api-Key', 'X-Requested-With', 'Accept', 'Access-Control-Allow-Methods', 'Access-Control-Allow-Origin', 'Access-Control-Allow-Headers'];

const standardCorsMockIntegration = new apigateway.MockIntegration({
  integrationResponses: [{
    statusCode: '200',
    responseParameters: {
      'method.response.header.Access-Control-Allow-Headers': `'${ALLOWED_HEADERS.join(",")}'`,
      'method.response.header.Access-Control-Allow-Origin': "'*'",
      'method.response.header.Access-Control-Allow-Credentials': "'false'",
      'method.response.header.Access-Control-Allow-Methods': "'OPTIONS,GET,PUT,POST,DELETE'",
    },
  }],
  passthroughBehavior: apigateway.PassthroughBehavior.NEVER,
  requestTemplates: {
    "application/json": "{\"statusCode\": 200}"
  }
});

const optionsMethodResponse = {
  statusCode: '200',
  responseModels: {
    'application/json': apigateway.Model.EMPTY_MODEL
  },
  responseParameters: {
    'method.response.header.Access-Control-Allow-Headers': true,
    'method.response.header.Access-Control-Allow-Methods': true,
    'method.response.header.Access-Control-Allow-Credentials': true,
    'method.response.header.Access-Control-Allow-Origin': true,
  }
};

dataProxy.addMethod("OPTIONS", standardCorsMockIntegration, {
  authorizationType: apigateway.AuthorizationType.NONE,
  methodResponses: [
    optionsMethodResponse
  ]
});
When you deploy the API, you can verify in the API Gateway console that your methods have been set up correctly: the proxy's ANY method has authorization enabled while the OPTIONS method does not.
Reference to the GitHub issue that helped: GitHub Issue 'apigateway: add explicit support for CORS'
I'm trying to restart my Kubernetes deployment via the Kubernetes API using the @kubernetes/client-node library. I'm not using deployment scaling because I only need one deployment (db and service container) per app.
I also tried to restart a single container inside the deployment via exec (/sbin/reboot or kill), but that does not seem to work with the Node.js library because it fails to upgrade to a WebSocket connection, which the Kubernetes exec endpoint apparently requires. The other idea was to restart the whole deployment by setting the scale to 0 and then back to 1, but I can't get that working via the Node.js library either. I tried to find an example for it, but was not successful.
A rolling restart is not an option for me, because my application doesn't support multiple instances.
I tried to scale like this:
await k8sApi.patchNamespacedDeploymentScale(`mydeployment-name`, 'default', {
  spec: { replicas: 0 },
});

await k8sApi.patchNamespacedDeploymentScale(`mydeployment-name`, 'default', {
  spec: { replicas: 1 },
});
And to reboot the container I tried this:
await coreV1Api.connectPostNamespacedPodExec(
  podName,
  'default',
  '/sbin/reboot',
  'web',
  false,
  false,
  false,
  false
);
Extra input:
When trying to use patchNamespacedDeployment, I get the following error back from the Kubernetes API:
statusCode: 415,
statusMessage: 'Unsupported Media Type',
And response body:
V1Scale {
apiVersion: 'v1',
kind: 'Status',
metadata: V1ObjectMeta {
annotations: undefined,
clusterName: undefined,
creationTimestamp: undefined,
deletionGracePeriodSeconds: undefined,
deletionTimestamp: undefined,
finalizers: undefined,
generateName: undefined,
generation: undefined,
labels: undefined,
managedFields: undefined,
name: undefined,
namespace: undefined,
ownerReferences: undefined,
resourceVersion: undefined,
selfLink: undefined,
uid: undefined
},
spec: undefined,
status: V1ScaleStatus { replicas: undefined, selector: undefined }
When trying the exec approach, I get the following response:
kind: 'Status',
apiVersion: 'v1',
metadata: {},
status: 'Failure',
message: 'Upgrade request required',
reason: 'BadRequest',
code: 400
I already looked up the upgrade-request error, and it seems the library isn't aware of this; the library appears to have been generated from the API function signatures or something similar, so it does not handle WebSockets.
It really does seem like there is a bug in the Node Kubernetes client library.
On PATCH requests it should set the content type to "application/json-patch+json", but instead it sends "application/json".
That's why you get Unsupported Media Type back from the API.
Furthermore, you need to use the JSON Patch format for the body you send: http://jsonpatch.com
To set the content type manually, you can pass custom headers to the function call.
This worked for me:
const patch = [
  {
    op: 'replace',
    path: '/spec/replicas',
    value: 0,
  },
];

await k8sApi.patchNamespacedDeployment(
  `mydeployment-name`,
  'default',
  patch,
  undefined,
  undefined,
  undefined,
  undefined,
  { headers: { 'content-type': 'application/json-patch+json' } }
);
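To complete the restart, the deployment can be scaled back up the same way once the pods have terminated (a sketch reusing the call above):

// Second patch: scale the deployment back up to one replica.
const scaleUpPatch = [
  {
    op: 'replace',
    path: '/spec/replicas',
    value: 1,
  },
];

await k8sApi.patchNamespacedDeployment(
  `mydeployment-name`,
  'default',
  scaleUpPatch,
  undefined,
  undefined,
  undefined,
  undefined,
  { headers: { 'content-type': 'application/json-patch+json' } }
);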
After some Google searching I found that this problem has existed since 2018: https://github.com/kubernetes-client/javascript/issues/19
const k8s = require('kubernetes-client');

const endpoint = 'https://' + IP;

const ext = new k8s.Extensions({
  url: endpoint,
  version: 'v1beta1',
  insecureSkipTlsVerify: true,
  namespace,
  auth: {
    bearer: token,
  },
});

const body = {
  spec: {
    template: {
      spec: {
        metadata: [{
          name,
          image,
        }]
      }
    }
  }
};

ext.namespaces.deployments(name).put({ body }, (err, response) => { console.log(response); });
The above functions seem to authenticate with GET and PUT; however, I get the following error message when using POST:
the server does not allow this method on the requested resource
I think the problem might be that, due to the Kubernetes 1.6 change to RBAC, your pod does not have the right privileges to schedule pods, get logs, ... through the API server.
Make sure you are using the admin.conf kubeconfig.
But be aware that giving the node cluster-admin permissions effectively makes anyone who can access the node a cluster admin ;)