A question regarding resource allocation in Kubernetes

I'm trying to find out how Kubernetes calculates the allocation of resources, but I cannot find it in the source code.
In the official Kubernetes documentation, allocatable is calculated as [Allocatable] = [Node Capacity] - [Kube-Reserved] - [System-Reserved] - [Hard-Eviction-Threshold]. Could you please help me find the related source code in the Kubernetes GitHub repository?
I would like to change the allocation policy in Kubernetes, so I need to find the relevant code.
Cheers

There are a couple of options:
The scheduler uses the value of node.Status.Allocatable instead of node.Status.Capacity to decide whether a node is a candidate for pod scheduling. So one way to do something custom is to bypass the default scheduler and specify your own scheduler, as sketched below.
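For illustration, a pod opts into a custom scheduler via the schedulerName field in its spec (a minimal sketch; the scheduler name below is a placeholder for whatever scheduler you actually deploy):

apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  # Placeholder name; must match the name your custom scheduler registers with.
  # If no scheduler with this name is running, the pod stays Pending.
  schedulerName: my-custom-scheduler
  containers:
    - name: app
      image: nginx
      resources:
        requests:
          cpu: 100m
          memory: 128Mi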
The second option is to change the values and options used by the kubelet (see the kubelet configuration details).
You can set these in the kubeletArguments section of the node configuration map by using a set of key=value pairs (e.g., cpu=200m,memory=512Mi). Add the section if it does not already exist; a sketch is shown below.
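A minimal sketch, assuming an OpenShift-style node configuration map (the reservation and eviction values are only illustrative):

kubeletArguments:
  kube-reserved:
    - "cpu=200m,memory=512Mi"
  system-reserved:
    - "cpu=200m,memory=512Mi"
  eviction-hard:
    - "memory.available<100Mi"

Allocatable then drops by the sum of these reservations, per the formula quoted in the question.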
The last option, which may be what you are looking for, is to change the code, i.e., the way things are calculated.
https://github.com/kubernetes/kubernetes/blob/05183bffe5cf690b418718aa107f5655e4ac0618/pkg/scheduler/nodeinfo/node_info.go
start from here:
// AllocatableResource returns allocatable resources on a given node.
func (n *NodeInfo) AllocatableResource() Resource {
    if n == nil {
        return emptyResource
    }
    return *n.allocatableResource
}
Here is a portion of the scheduler that uses that info:
if allocatable.Memory < podRequest.Memory+nodeInfo.RequestedResource().Memory {
    predicateFails = append(predicateFails, NewInsufficientResourceError(v1.ResourceMemory, podRequest.Memory, nodeInfo.RequestedResource().Memory, allocatable.Memory))
}
https://github.com/kubernetes/kubernetes/blob/788f24583e95ac47938a41daaf1f1efc58153738/pkg/scheduler/algorithm/predicates/predicates.go


Resolution error: Cannot use resource 'x' in a cross-environment fashion, the resource's physical name must be explicit set

I'm trying to pass an ecs cluster from one stack to another stack.
I get this error:
Error: Resolution error: Resolution error: Resolution error: Cannot use resource 'BackendAPIStack/BackendAPICluster' in a cross-environment fashion, the resource's physical name must be explicit set or use `PhysicalName.GENERATE_IF_NEEDED`.
The cluster is defined as below in BackendAPIStack:
this.cluster = new ecs.Cluster(this, 'BackendAPICluster', {
  vpc: this.vpc
});
The stacks are defined as follows:
const backendAPIStack = new BackendAPIStack(app, `BackendAPIStack${settingsForThisEnv.stackVersion}`, {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION
  },
  digicallPolicyQueue: digicallPolicyQueue,
  environmentName,
  ...settingsForThisEnv
});
const metabaseStack = new MetabaseStack(app, 'MetabaseStack', backendAPIStack.vpc, backendAPIStack.cluster, {
  vpc: backendAPIStack.vpc,
  cluster: backendAPIStack.cluster
});
metabaseStack.addDependency(backendAPIStack);
Here's the constructor for metabaseStack:
constructor(scope: cdk.Construct, id: string, vpc: ec2.Vpc, cluster: ecs.Cluster, props: MetabaseStackProps) {
  super(scope, id, props);
  console.log('cluster', cluster)
  this.vpc = vpc;
  this.cluster = cluster;
  this.setupMetabase()
}
and then I'm using the cluster here:
const metabaseService = new ecs_patterns.ApplicationLoadBalancedFargateService(this, 'Metabase', {
  assignPublicIp: false,
  cluster: this.cluster,
  ...
I can't find documentation on how to do what I'm trying to do.
You're creating a Region/Account-specific Stack with BackendAPIStack because you're binding the stack to a specific account and region via the env prop value.
Then you're creating a Region/Account-agnostic stack by creating the MetabaseStack without any env prop value.
In general, having two independent stacks like this is fine, but here you're linking them together by passing a reference from the BackendAPIStack to the MetabaseStack, which won't work.
This is a problem because CDK normally links Stacks together by performing Stack Exports and Imports of values, but CloudFormation does not support cross-region or cross-account Stack references.
So, your possible solutions are:
(A) Set up your MetabaseStack to use the same account/region as your BackendAPIStack
Under the hood this will set up the Cluster's ARN as a Stack export from BackendAPIStack, and MetabaseStack will then be able to import it, as in the sketch below.
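A minimal sketch, reusing the wiring from your question and assuming MetabaseStackProps extends cdk.StackProps so that env is accepted:

// Bind MetabaseStack to the same account/region as BackendAPIStack so CDK can
// wire the cluster reference through a normal Stack export/import.
const metabaseStack = new MetabaseStack(app, 'MetabaseStack', backendAPIStack.vpc, backendAPIStack.cluster, {
  env: {
    account: process.env.CDK_DEFAULT_ACCOUNT,
    region: process.env.CDK_DEFAULT_REGION
  },
  vpc: backendAPIStack.vpc,
  cluster: backendAPIStack.cluster
});
metabaseStack.addDependency(backendAPIStack);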
(B1) Create BackendAPICluster with a clusterName that you pick.
i.e. new Cluster(..., {vpc: this.vpc, clusterName: 'backendCluster' })
By not providing a name, you're using the default of a "CloudFormation-generated name", which is the basis of the issue that CDK is reporting, albeit in a confusing way.
When you do provide a name, then the ARN for the cluster is deterministic (not picked by CloudFormation at deployment time) so CDK then has enough information at build time to determine what the Cluster's ARN will be and can provide that to your MetabaseStack.
(B2) Create BackendAPICluster with a clusterName and let CDK pick
This is done by setting the clusterName to PhysicalName.GENERATE_IF_NEEDED
i.e. new Cluster(..., {clusterName: PhysicalName.GENERATE_IF_NEEDED })
PhysicalName.GENERATE_IF_NEEDED is a marker that indicates that a physical name will only be generated by the CDK if it is needed for cross-environment references. Otherwise, it will be allocated by CloudFormation.
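Applied to the cluster definition from your question, that would look roughly like this (PhysicalName comes from @aws-cdk/core in CDK v1, or aws-cdk-lib in v2):

import { PhysicalName } from '@aws-cdk/core';

// CDK only generates a concrete cluster name if another environment needs to
// reference this cluster; otherwise CloudFormation names it as usual.
this.cluster = new ecs.Cluster(this, 'BackendAPICluster', {
  vpc: this.vpc,
  clusterName: PhysicalName.GENERATE_IF_NEEDED
});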
This is what the error is trying to tell you, but I didn't understand it either...
If possible, I would go with (A). I suspect it was just an oversight anyway that you weren't passing the same env values to the MetabaseStack and you probably want both of these stacks in the same region to reduce latency and all that.
If not, then I would personally go with (B2) next, because I try not to give any of my resources explicit names unless they are part of some contract with another group, e.g. "Assume the role named 'ServiceWorker' in Account XYZ" or "Download the data from Bucket 'ABC'".

How to remove clustering from vis.js

I have clustering set up on a vis.js network diagram. Adding nodes to a cluster works, but I cannot remove a node from a cluster. I believe the problem is that the first time the code below runs it creates a cluster; after I make some modifications to the nodes (e.g. remove a node from a group) and run it a second time, it keeps the previous cluster and just adds nodes (if any were added) but doesn't remove them (if any were removed).
So I think that removing all the cluster options and then applying them again should do the trick, but I can't find a way to achieve that.
const clusterOption = {
  joinCondition: function (childOptions) {
    return childOptions.cid === group.groupId;
  },
  clusterNodeProperties: {
    id: group.groupId,
    label: group.label,
    shape: 'database',
    allowSingleNodeCluster: true
  }
};
this.network.cluster(clusterOption);
So my idea would be to do something along the following lines (in pseudocode) before calling the above code.
this.network.clearClusters();
Running this.network.cluster(clusterOption); again seems to work in my experiments (see http://jsfiddle.net/thomaash/t8q37Lsc/, though I adapted one of the examples since you didn't provide an MWE). But this may be a bug. It seems to me that the cluster should be updated when the underlying nodes are updated. Do you plan to open an issue (https://github.com/visjs/vis-network/issues/new)?
PS: What version are you using? People sometimes miss an update and then try to solve issues that are resolved in newer versions.
function decluster() {
  for (const index of network.body.nodeIndices) {
    if (network.isCluster(index) == true) {
      network.openCluster(index);
    }
  }
}
The network instance has an "isCluster" method which takes a node index as a parameter and which you can invoke to identify whether a given node is a cluster. Once you have identified that a given node is a cluster, you can invoke the "openCluster" method on the same network instance with that node index.
Below is a code snippet for a Vue3 implementation.
unclusterNodes() {
  for (const index of this.network.body.nodeIndices) {
    if (this.network.isCluster(index) == true) {
      this.network.openCluster(index);
    }
  }
}

Error using CLI for cloud functions with IAM namespaces

I'm trying to create an IBM Cloud Function web action from some python code. This code has a dependency which isn't in the runtime, so I've followed the steps here to package the dependency with my code. I now need to create the action on the cloud for this package, using the steps described here. I've got several issues.
The first is that I want to check that this will be going into the right namespace. However, though I have several, none are showing up when I do ibmcloud fn namespace list; I just get the empty table with headers. I checked that I was targeting the right region using ibmcloud target -r eu-gb.
The second is that when I try to bypass the problem above by creating a namespace from the command line using ibmcloud fn namespace create myNamespaceName, it works, but when I then check the web UI, the new namespace has been created in the Dallas region instead of the London one… I can’t seem to get it to create a namespace in the region that I am currently targeting; for some reason it’s always Dallas.
The third problem is that when I try to follow steps 2 and 3 from here regardless, accepting that the action will end up in the unwanted Dallas namespace, by running the equivalent of ibmcloud fn action create demo/hello <filepath>/hello.js --web true, it keeps telling me I need to target an org and a space. But my namespace is an IAM namespace; it doesn’t have an org or a space, so there are none to give?
Please let me know if I’m missing something obvious or have misunderstood something, because to me it feels like the CLI is not respecting the targeting of a region and not handling IAM stuff correctly.
Edit: adding code as suggested, but this code runs fine locally; it's the CLI part that I'm struggling with.
import sys
import requests
import pandas as pd
import json
from ibm_ai_openscale import APIClient
def main(dict):
    # Get AI Openscale GUID
    AIOS_GUID = None
    token_data = {
        'grant_type': 'urn:ibm:params:oauth:grant-type:apikey',
        'response_type': 'cloud_iam',
        'apikey': 'SOMEAPIKEYHERE'
    }
    response = requests.post('https://iam.bluemix.net/identity/token', data=token_data)
    iam_token = response.json()['access_token']
    iam_headers = {
        'Content-Type': 'application/json',
        'Authorization': 'Bearer %s' % iam_token
    }
    resources = json.loads(requests.get('https://resource-controller.cloud.ibm.com/v2/resource_instances', headers=iam_headers).text)['resources']
    for resource in resources:
        if "aiopenscale" in resource['id'].lower():
            AIOS_GUID = resource['guid']
            AIOS_CREDENTIALS = {
                "instance_guid": AIOS_GUID,
                "apikey": 'SOMEAPIKEYHERE',
                "url": "https://api.aiopenscale.cloud.ibm.com"
            }
    if AIOS_GUID is None:
        print('AI OpenScale GUID NOT FOUND')
    else:
        print('AI OpenScale FOUND')
    # GET OPENSCALE SUBSCRIPTION
    ai_client = APIClient(aios_credentials=AIOS_CREDENTIALS)
    subscriptions_uids = ai_client.data_mart.subscriptions.get_uids()
    for sub in subscriptions_uids:
        if ai_client.data_mart.subscriptions.get_details(sub)['entity']['asset']['name'] == "MYMODELNAME":
            subscription = ai_client.data_mart.subscriptions.get(sub)
    # EXPLAINABILITY TEST
    sample_transaction_id = "SAMPLEID"
    run_details = subscription.explainability.run(transaction_id=sample_transaction_id, cem=False)
    # Formatting results
    run_details_json = json.dumps(run_details)
    return run_details_json
I know the OP said they were 'targeting the right region'. But I want to make it clear that the 'right region' is the exact region in which the namespaces you want to list or target are located.
Unless you target this region, you won't be able to list or target any of those namespaces.
This is counterintuitive because
You are able to list Service IDs of namespaces in regions other than the one you are targeting.
The web portal allows you to see namespaces in all regions, so why shouldn't the CLI?
I was having an issue very similar to the OP's first problem, but once I targeted the correct region it worked fine.
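For what it's worth, here is a rough sketch of that flow (the region, resource group and namespace name are placeholders for your own values):

# Target the exact region (and resource group) that contains your IAM namespaces.
ibmcloud target -r eu-gb -g Default

# The namespaces in that region should now be listable and targetable.
ibmcloud fn namespace list
ibmcloud fn namespace target myNamespaceName

# With an IAM namespace targeted, action creation should no longer ask for an org/space.
ibmcloud fn action create demo/hello <filepath>/hello.js --web true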

Find partition key of stateful service at runtime

I need to find the current partition key of a Service Fabric Stateful Service at run time.
I have looked in the ICodePackageActivationContext and the StatefulServiceContext but can't seem to see this information anywhere.
Edit:
As LoekD pointed out in his answer, this information is available from within the StatefulService class. Just to be explicitly clear, here is how I accessed it:
var info = (Int64RangePartitionInformation) this.Partition.PartitionInfo;
var highKey = info.HighKey;
var lowKey = info.LowKey;
From within the service itself, you can use the Partition.PartitionInfo property.

CQ5 / AEM5.6 Workflow: Access workflow instance properties from inside OR Split

TL;DR version:
In CQ workflows, is there a difference between what's available to the OR Split compared to the Process Step?
Is it possible to access the /history/ nodes of a workflow instance from within an OR Split?
How?!
The whole story:
I'm working on a workflow in CQ5 / AEM5.6.
In this workflow I have a custom dialog, which stores a couple of properties on the workflow instance.
The path to the property I'm having trouble with is: /workflow/instances/[this instance]/history/[workItem id]/workItem/metaData and I've called the property "reject-or-approve".
The dialog sets the property fine (via a dropdown that lets you set it to "reject" or "approve"), and I can access other properties on this node via a process step (in ecma script) using:
var actionReason;
var history = workflowSession.getHistory(workItem.getWorkflow());
// loop backwards through workItems
// and as soon as we find an Action Reason that is not empty
// store that as 'actionReason' and break.
for (var index = history.size() - 1; index >= 0; index--) {
    var previous = history.get(index);
    var tempActionReason = previous.getWorkItem().getMetaDataMap().get('action-message');
    if ((tempActionReason != '') && (tempActionReason != null)) {
        actionReason = tempActionReason;
        break;
    }
}
The process step is not the problem though. Where I'm having trouble is when I try to do the same thing from inside an OR Split.
When I try the same workflowSession.getHistory(workItem.getWorkflow()) in an OR Split, it throws an error saying workItem is not defined.
I've tried storing this property on the payload instead (i.e. storing it under the page's jcr:content), and in that case the property does seem to be available to the OR Split, but my problems with that are:
This reject-or-approve property is only relevant to the current workflow instance, so storing it on the page's jcr:content doesn't really make sense. jcr:content properties will persist after the workflow is closed, and will be accessible to future workflow instances. I could work around this (i.e. don't let workflows do anything based on the property unless I'm sure this instance has written to the property already), but this doesn't feel right and is probably error-prone.
For some reason, when running through the custom dialog in my workflow, only the Admin user group seems to be able to write to the jcr:content property. When I use the dialog as any other user group (which I need to do for this workflow design), the dialog looks as though it's working, but never actually writes to the jcr:content property.
So for a couple of different reasons I'd rather keep this property local to the workflow instance instead of storing it on the page's jcr:content -- however, if anyone can think of a reason why my dialog isn't setting the property on the jcr:content when I use any group other than admin, that would give me a workaround even if it's not exactly the solution I'm looking for.
Thanks in advance if anyone can help! I know this is kind of obscure, but I've been stuck on it for ages.
A couple of days ago I ran into the same issue. The issue here is that you don't have the workItem object, because you don't really have an existing work item. Imagine the following: as you go through the workflow, you get a couple of work items, meaning either process steps or inbox items. When you are in an OR Split, you don't have any existing work items; you can verify this by visiting the /workItems node of the workflow instance. Your workaround seems to be the only way to get around this "issue".
I've solved it. It's not all that elegant looking, but it seems to be a pretty solid solution.
Here's some background:
Dialogs seem to reliably let you store properties either on:
the payload's jcr:content node (which wasn't practical for me, because the payload is locked during the workflow, and doesn't let non-admins write to its jcr:content)
the workItem/metaData for the current workflow step
However, Split steps don't have access to workItem. I found a fairly un-helpful confirmation of that here: http://blogs.adobe.com/dmcmahon/2013/03/26/cq5-failure-running-script-etcworkflowscriptscaworkitem-ecma-referenceerror-workitem-is-not-defined/
So basically the issue was, the Dialog step could store the property, but the OR Split couldn't access it.
My workaround was to add a Process step straight after the Dialog in my workflow. Process steps do have access to workItem, so they can read the property set by the Dialog. I never particularly wanted to store this data on the payload's jcr:content, so I looked for another location. It turns out the workflow metaData (at the top level of the workflow instance node, rather than workItem/metaData, which is inside the /history sub-node) is accessible to both the Process step and the OR Split. So, my Process step now reads the workItem's approveReject property (set by the Dialog), and then writes it to the workflow's metaData node. Then, the OR Split reads the property from its new location, and does its magic.
The way you access the workflow metaData from the Process step and the OR Split is not consistent, but you can get there from both.
Here's some code: (complete with comments. You're welcome)
In the dialog where you choose to approve or reject, the name of the field is set to rejectApprove. There's no ./ or anything before it. This tells it to store the property on the workItem/metaData node for the current workflow step under /history/.
Straight after the dialog, a Process step runs this:
var rejectApprove;
var history = workflowSession.getHistory(workItem.getWorkflow());
// loop backwards through workItems
// and as soon as we find a rejectApprove that is not empty
// store that as 'rejectApprove' and break.
for (var index = history.size() - 1; index >= 0; index--) {
    var previous = history.get(index);
    var tempRejectApprove = previous.getWorkItem().getMetaDataMap().get('rejectApprove');
    if ((tempRejectApprove != '') && (tempRejectApprove != null)) {
        rejectApprove = tempRejectApprove;
        break;
    }
}
// steps up from the workflow step into the workflow metaData,
// and stores the rejectApprove property there
// (where it can be accessed by an OR Split)
workItem.getWorkflowData().getMetaData().put('rejectApprove', rejectApprove);
Then after the Process step, the OR Split has the following in its tabs:
function check() {
    var match = 'approve';
    if (workflowData.getMetaData().get('rejectApprove') == match) {
        return true;
    } else {
        return false;
    }
}
Note: use this for the tab for the "approve" path, then copy it and replace var match = 'approve' with var match = 'reject'
So the key here is that from a Process step:
workItem.getWorkflowData().getMetaData().put('rejectApprove', rejectApprove);
writes to the same property that:
workflowData.getMetaData().get('rejectApprove') reads from when you execute it in an OR Split.
To suit our business requirements, there's more to the workflow I've implemented than just this, but the method above seems to be a pretty reliable way to get values that are entered in a dialog, and access them from within an OR Split.
It seems pretty silly that the OR Split can't access the workItem directly, and I'd be interested to know if there's a less roundabout way of doing this, but for now this has solved my problem.
I really hope someone else has this same problem and finds this useful, because it took me waaay too long to figure out, only to apply it once!