How to restart a Kubernetes deployment via the Node.js client library

I'm trying to restart my Kubernetes deployment via the Kubernetes API using the
@kubernetes/client-node library. I'm not using deployment scale because I only need one deployment (db and service container) per app.
I also tried to restart a single container inside the deployment via exec (/sbin/reboot or kill), but that doesn't seem to work with the Node.js library: it fails to upgrade to a WebSocket connection, which the Kubernetes exec endpoint apparently requires. The other idea was to restart the whole deployment by setting the scale to 0 and then back to 1, but I can't get that working via the Node.js library either. I tried to find an example for that, but wasn't successful.
A rolling restart is not an option for me, because my application doesn't support multiple instances.
I tried to scale like this:
await k8sApi.patchNamespacedDeploymentScale(`mydeployment-name`, 'default', {
  spec: { replicas: 0 },
});
await k8sApi.patchNamespacedDeploymentScale(`mydeployment-name`, 'default', {
  spec: { replicas: 1 },
});
And to reboot the container I tried this:
await coreV1Api.connectPostNamespacedPodExec(
  podName,
  'default',
  '/sbin/reboot',
  'web',
  false,
  false,
  false,
  false
);
Extra input:
When trying to use patchNamespacedDeployment I get the following error back from the Kubernetes API:
statusCode: 415,
statusMessage: 'Unsupported Media Type',
And this response body:
V1Scale {
  apiVersion: 'v1',
  kind: 'Status',
  metadata: V1ObjectMeta {
    annotations: undefined,
    clusterName: undefined,
    creationTimestamp: undefined,
    deletionGracePeriodSeconds: undefined,
    deletionTimestamp: undefined,
    finalizers: undefined,
    generateName: undefined,
    generation: undefined,
    labels: undefined,
    managedFields: undefined,
    name: undefined,
    namespace: undefined,
    ownerReferences: undefined,
    resourceVersion: undefined,
    selfLink: undefined,
    uid: undefined
  },
  spec: undefined,
  status: V1ScaleStatus { replicas: undefined, selector: undefined }
}
When trying the exec approach I get the following response:
kind: 'Status',
apiVersion: 'v1',
metadata: {},
status: 'Failure',
message: 'Upgrade request required',
reason: 'BadRequest',
code: 400
I already looked up the "Upgrade request required" error, and it seems the library can't handle it: the client appears to be auto-generated from the API definitions, so it is not aware of WebSockets.
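Update: newer releases of @kubernetes/client-node do expose a separate WebSocket-based Exec helper that performs the upgrade itself; a hedged sketch in case it helps:
const k8s = require('@kubernetes/client-node');

const kc = new k8s.KubeConfig();
kc.loadFromDefault();

// Exec speaks the WebSocket protocol the exec endpoint requires,
// unlike the generated connectPostNamespacedPodExec REST call.
const execClient = new k8s.Exec(kc);
await execClient.exec(
  'default',          // namespace
  podName,            // pod name
  'web',              // container
  ['/sbin/reboot'],   // command
  process.stdout,     // stdout stream
  process.stderr,     // stderr stream
  null,               // no stdin
  false               // no tty
);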

It really seems like there is a bug in the Node Kubernetes client library.
On PATCH requests it should set the content type to "application/json-patch+json", but instead it sends "application/json".
That's why you get Unsupported Media Type back from the API.
Furthermore, you need to use the JSON Patch format for the body you send: http://jsonpatch.com
To manually set the content type, you can pass custom headers to the function call.
This worked for me:
const patch = [
  {
    op: 'replace',
    path: '/spec/replicas',
    value: 0,
  },
];
await k8sApi.patchNamespacedDeployment(
  `mydeployment-name`,
  'default',
  patch,
  undefined,
  undefined,
  undefined,
  undefined,
  { headers: { 'content-type': 'application/json-patch+json' } }
);
After some Google searching I found that this problem has existed since 2018: https://github.com/kubernetes-client/javascript/issues/19
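As a follow-up, the restart-by-scaling idea from the question can then be done with two of these patches. A minimal sketch, assuming k8sApi is an AppsV1Api instance and using a crude fixed delay as a stand-in for actually watching the old pod until it is gone:
// Sketch: scale the deployment to 0 and back to 1 via JSON Patch.
const headers = { headers: { 'content-type': 'application/json-patch+json' } };
const scaleTo = (replicas) => [
  { op: 'replace', path: '/spec/replicas', value: replicas },
];

await k8sApi.patchNamespacedDeployment(
  'mydeployment-name', 'default', scaleTo(0),
  undefined, undefined, undefined, undefined, headers
);
await new Promise((resolve) => setTimeout(resolve, 10000)); // wait for the old pod to terminate
await k8sApi.patchNamespacedDeployment(
  'mydeployment-name', 'default', scaleTo(1),
  undefined, undefined, undefined, undefined, headers
);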

Related

Programmatically issuing a Kubernetes certificate

I was able to manually create a certificate:
I created a csr file
I created and applied a CertificateSigningRequest k8s resource
I approved the request using
kubectl certificate approve <name>
I extracted the certificate from the CertificateSigningRequest's status.certificate field.
Now I want to repeat the process programmatically. I'm using the @kubernetes/client-node npm package for this purpose.
I'm able to create and apply the CertificateSigningRequest resource:
const csrResource = await adminCertApi.createCertificateSigningRequest({
  metadata: {
    name: 'my.email@my.company.com',
  },
  spec: {
    request: csrBase64,
    signerName: 'kubernetes.io/kube-apiserver-client',
    usages: [
      'client auth'
    ],
  },
});
But then I get stuck trying to approve the request (trying to follow the documentation). I tried several variations that look like this:
csrResource.body.status.conditions = [
  {
    message: 'Approved by CWAdmin GraphQL Lambda function',
    reason: 'ApprovedByCWAdmin',
    type: 'Approved',
  }
];
const response = await adminCertApi.patchCertificateSigningRequest(
  'my.email@my.company.com',
  csrResource.body,
  undefined, undefined, undefined, undefined,
  { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } }
);
Unfortunately, this does not update the status.conditions field. Even if it did, what triggers the signing of the certificate? The documentation states that the kube-controller-manager never auto-approves requests of type kubernetes.io/kube-apiserver-client.
In other words, what is the programmatic equivalent of kubectl certificate approve?
I found this bit of documentation that helped me solve the issue:
status is required and must be True, False, or Unknown
Approved/Denied conditions can only be set via the /approval subresource
So I added the status field to the condition and changed the API call to patchCertificateSigningRequestApproval.
The working code now looks like this:
const body = {
  status: {
    conditions: [
      {
        message: 'Approved by CWAdmin GraphQL Lambda function',
        reason: 'ApprovedByCWAdmin',
        type: 'Approved',
        status: 'True',
      }
    ]
  }
};
const response = await adminCertApi.patchCertificateSigningRequestApproval(
  'my.email@my.company.com',
  body,
  undefined, undefined, undefined, undefined,
  { headers: { 'Content-Type': 'application/strategic-merge-patch+json' } }
);
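Once the request is approved, the built-in signer should issue the certificate, and it can be read back from status.certificate (base64-encoded PEM). A minimal polling sketch, assuming the same adminCertApi (CertificatesV1Api) client as above; waitForCertificate is a hypothetical helper, not part of the library:
// Sketch: poll until the signer populates status.certificate.
async function waitForCertificate(name, attempts = 10) {
  for (let i = 0; i < attempts; i++) {
    const res = await adminCertApi.readCertificateSigningRequest(name);
    const cert = res.body.status && res.body.status.certificate;
    if (cert) {
      return Buffer.from(cert, 'base64').toString('utf8'); // decoded PEM text
    }
    await new Promise((resolve) => setTimeout(resolve, 1000)); // retry after 1s
  }
  throw new Error(`certificate ${name} was not issued in time`);
}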

Is it possible to get the total number of messages in SQS?

I see there are 2 separate metrics ApproximateNumberOfMessagesVisible and ApproximateNumberOfMessagesNotVisible.
Using the number of messages visible causes processing pods to be targeted for termination immediately after they pick up a message from the queue, since the message is no longer visible. If I use the number of messages not visible, it will not scale up.
I'm trying to scale a Kubernetes service using the Horizontal Pod Autoscaler and an external metric from SQS. Here is the ExternalMetric template:
apiVersion: metrics.aws/v1alpha1
kind: ExternalMetric
metadata:
  name: metric-name
spec:
  name: metric-name
  queries:
    - id: metric_name
      metricStat:
        metric:
          namespace: "AWS/SQS"
          metricName: "ApproximateNumberOfMessagesVisible"
          dimensions:
            - name: QueueName
              value: "queue_name"
        period: 60
        stat: Average
        unit: Count
      returnData: true
Here is the HPA template:
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta1
metadata:
  name: hpa-name
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: deployment-name
  minReplicas: 1
  maxReplicas: 50
  metrics:
    - type: External
      external:
        metricName: metric-name
        targetAverageValue: 1
The problem would be solved if I could define another custom metric that is the sum of these two metrics. How else can I solve this problem?
We used a Lambda to fetch the two metrics and publish a custom metric that is the sum of the in-flight and waiting messages, then triggered the Lambda with a CloudWatch Events rule at whatever frequency you want: https://console.aws.amazon.com/cloudwatch/home?region=us-east-1#rules:action=create
Here is the Lambda code for reference:
const AWS = require('aws-sdk');
const cloudwatch = new AWS.CloudWatch({region: ''}); // fill region here
const sqs = new AWS.SQS();
const SQS_URL = ''; // fill queue url here

// Fetch all queue attributes, including the two approximate message counts.
async function getSqsMetric(queueUrl) {
  var params = {
    QueueUrl: queueUrl,
    AttributeNames: ['All']
  };
  return new Promise((res, rej) => {
    sqs.getQueueAttributes(params, function(err, data) {
      if (err) rej(err);
      else res(data);
    });
  });
}

// Build the CloudWatch payload for the combined queue-size metric.
function buildMetric(numMessages) {
  return {
    Namespace: 'yourcompany-custom-metrics',
    MetricData: [{
      MetricName: 'mymetric',
      Dimensions: [{
        Name: 'env',
        Value: 'prod'
      }],
      Timestamp: new Date(),
      Unit: 'Count',
      Value: numMessages
    }]
  };
}

async function pushMetrics(metrics) {
  await new Promise((res) => cloudwatch.putMetricData(metrics, (err, data) => {
    if (err) {
      console.log('err', err, err.stack); // an error occurred
      res(err);
    } else {
      console.log('response', data); // successful response
      res(data);
    }
  }));
}

exports.handler = async (event) => {
  console.log('Started');
  const sqsMetrics = await getSqsMetric(SQS_URL).catch(console.error);
  var queueSize = null;
  if (sqsMetrics) {
    console.log('Got sqsMetrics', sqsMetrics);
    if (sqsMetrics.Attributes) {
      // Sum of waiting (visible) and in-flight (not visible) messages.
      queueSize = parseInt(sqsMetrics.Attributes.ApproximateNumberOfMessages) +
        parseInt(sqsMetrics.Attributes.ApproximateNumberOfMessagesNotVisible);
      console.log('Pushing', queueSize);
      await pushMetrics(buildMetric(queueSize));
    }
  } else {
    console.log('Failed fetching sqsMetrics');
  }
  const response = {
    statusCode: 200,
    body: JSON.stringify('Pushed ' + queueSize),
  };
  return response;
};
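With that custom metric being published, the ExternalMetric from the question can point at it instead. A sketch reusing the namespace, metric name, and dimension from the Lambda above (the total-queue-size names are placeholders):
apiVersion: metrics.aws/v1alpha1
kind: ExternalMetric
metadata:
  name: total-queue-size
spec:
  name: total-queue-size
  queries:
    - id: total_queue_size
      metricStat:
        metric:
          namespace: "yourcompany-custom-metrics"
          metricName: "mymetric"
          dimensions:
            - name: env
              value: "prod"
        period: 60
        stat: Average
        unit: Count
      returnData: true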
This seems to be a case of thrashing: the number of replicas keeps fluctuating because of the dynamic nature of the metrics being evaluated.
IMHO, you've got a couple of options here.
You could look at adding a stabilization window to your HPA and probably also limit the scale-down rate. You'd have to try a few combinations of metrics to see what works best for you, as you know the nature of the metrics (ApproximateNumberOfMessagesVisible in this case) in your infrastructure best.
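For reference, in autoscaling/v2beta2 and later, the stabilization window and scale-down rate are configured under spec.behavior; a hedged sketch (the values are placeholders to tune):
kind: HorizontalPodAutoscaler
apiVersion: autoscaling/v2beta2
metadata:
  name: hpa-name
spec:
  # scaleTargetRef, minReplicas, maxReplicas and metrics as above
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # use the highest recommendation from the last 5 minutes
      policies:
        - type: Pods
          value: 1           # remove at most one pod...
          periodSeconds: 60  # ...per minute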

How to edit a pulumi resource after it's been declared

I've declared a Kubernetes deployment like:
const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
  spec: {
    template: {
      metadata: {
        labels: {name: "ledger"},
        name: "ledger",
        // namespace: namespace,
      },
      spec: {
        containers: [
          ...
        ],
        volumes: [
          {
            emptyDir: {},
            name: "gunicorn-socket-dir"
          }
        ]
      }
    }
  }
});
Later on in my index.ts I want to conditionally modify the volumes of the deployment. I think this is a quirk of Pulumi I haven't wrapped my head around, but here's my current attempt:
if (myCondition) {
  ledgerDeployment.spec.template.spec.volumes.apply(volumes =>
    volumes.push({
      name: "certificates",
      secret: {
        items: [
          {key: "tls.key", path: "proxykey"},
          {key: "tls.crt", path: "proxycert"}
        ],
        secretName: "star.builds.qwil.co"
      }
    })
  );
}
When I do this I get the following error: Property 'mode' is missing in type '{ key: string; path: string; }' but required in type 'KeyToPath'
I suspect I'm using apply incorrectly. When I try to directly modify ledgerDeployment.spec.template.spec.volumes.push() I get an error Property 'push' does not exist on type 'Output<Volume[]>'.
What is the pattern for modifying resources in Pulumi? How can I add a new volume to my deployment?
It's not possible to modify the resource inputs after you've created the resource. Instead, you should place all the logic that defines the shape of the inputs before you call the constructor.
In your example, this could be:
let volumes = [
  {
    emptyDir: {},
    name: "gunicorn-socket-dir"
  }
];

if (myCondition) {
  volumes.push({...});
}

const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
  // <-- use `volumes` here
});
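Filled in with the volume from the question, the sketch might look like this (the k8s.types.input.core.v1.Volume annotation is optional, and the mode values are illustrative since the reported type error says mode is required in KeyToPath):
// Sketch: build the volumes list up front, then pass it to the Deployment.
const volumes: k8s.types.input.core.v1.Volume[] = [
  {
    emptyDir: {},
    name: "gunicorn-socket-dir",
  },
];

if (myCondition) {
  volumes.push({
    name: "certificates",
    secret: {
      items: [
        { key: "tls.key", path: "proxykey", mode: 0o400 }, // mode per the reported type error
        { key: "tls.crt", path: "proxycert", mode: 0o400 },
      ],
      secretName: "star.builds.qwil.co",
    },
  });
}

const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
  spec: {
    template: {
      // ...metadata as before...
      spec: {
        // containers: [...],
        volumes: volumes,
      },
    },
  },
});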

Can't get sails-hook-validate to work with Sails v1.0?

I am having an issue getting validation error messages to attach to the error object in Sails v1.0. I am using the sails-hook-validate module.
User model:
module.exports = {
  attributes: {
    name: {
      type: 'string',
      required: true,
    }
  },
  validationMessages: {
    name: {
      required: 'Name is required'
    },
  },
};
Running User.create in the sails console:
sails> User.create({}).exec(err => console.log(err.toJSON()));
{ error: 'E_UNKNOWN',
  status: 500,
  summary: 'Encountered an unexpected error',
  Errors: undefined }
It appears sails-hook-validate is modifying the error object somehow, but it doesn't seem to be adding my custom error message. Does anybody know how to get sails-hook-validate to work in Sails v1.0?
Sails v1 dramatically changed how validation errors are formatted and sails-hook-validate hasn't been updated to handle Sails v1 yet.
sails-hook-validate is a third-party hook and I don't think it's been updated to work with Sails v1.
As @jeffery mentioned, the structure of validation errors did change slightly in Sails v1, but there could be other changes that are affecting this hook.
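Until the hook catches up, one workaround is to translate Sails v1's validation errors to your custom messages by hand. A rough sketch, assuming create() failures surface as usage errors with code E_INVALID_NEW_RECORD (the messages map mirrors the model's validationMessages and is not part of any hook):
// Sketch: manual mapping of Sails v1 validation failures to custom messages.
const messages = {
  name: { required: 'Name is required' },
};

try {
  await User.create({});
} catch (err) {
  if (err.code === 'E_INVALID_NEW_RECORD') {
    // err.message describes which attribute rules failed; translate as needed.
    console.log(messages.name.required);
  } else {
    throw err;
  }
}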

Unable to create a new Kubernetes deployment using node 'kubernetes-client'

const k8s = require('kubernetes-client');
const endpoint = 'https://' + IP;
const ext = new k8s.Extensions({
  url: endpoint,
  version: 'v1beta1',
  insecureSkipTlsVerify: true,
  namespace,
  auth: {
    bearer: token,
  },
});
const body = {
  spec: {
    template: {
      spec: {
        metadata: [{
          name,
          image,
        }]
      }
    }
  }
};
ext.namespaces.deployments(name).put({body}, (err, response) => { console.log(response); });
The above calls seem to authenticate fine with GET and PUT; however, I get the following error message when using POST:
the server does not allow this method on the requested resource
I think the problem might be that, due to the Kubernetes 1.6 switch to RBAC, your pod doesn't have the right privileges to schedule pods, get logs, and so on through the API server.
Make sure you are using the admin.conf kubeconfig.
But be aware that giving the node cluster-admin permissions makes anyone who can access the node a cluster admin ;)
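If you'd rather grant the client's service account explicit RBAC permissions than reuse admin.conf, here is a deliberately broad sketch (the names are placeholders; scope this down with a Role/RoleBinding in practice):
# Hedged sketch: grants cluster-admin to the default service account in "default".
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: node-client-admin
subjects:
  - kind: ServiceAccount
    name: default
    namespace: default
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io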