I need to prioritize pod creation on nodes according to a given node label. I used CalculateNodeLabelPriority to enforce the rules I need. However, when I start the kube-scheduler I get the following error:
F0217 10:52:18.020751 3198 plugins.go:198] Invalid configuration: Priority type not found for CalculateNodeLabelPriority
I looked at master and I can see that the CalculateNodeLabelPriority priority is not registered in defaultPriorities:
https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/scheduler/algorithmprovider/defaults/defaults.go
Why isn't it registered even though it is mentioned in priorities.go? Is there a way I can register it?
I managed to fix the issue by adding an argument object: the factory code only builds the priority when the policy entry carries an argument (see the link below). The scheduler policy file then looks like this:
https://github.com/kubernetes/kubernetes/blob/release-1.1/plugin/pkg/scheduler/factory/plugins.go#L98
{
"predicates": [
{
"name": "HostName"
},
{
"name": "MatchNodeSelector"
},
{
"name": "PodFitsResources"
}
],
"priorities": [
{
"name": "NewNodeLabelPriority",
"weight": 1,
"argument": {
"labelPreference": {
"label": "pdc",
"presence": true
}
}
}
]
}
I have an application that runs some code and, at the end, sends an email with a report of the data. When I deploy pods on GKE, certain pods get terminated and a new pod is created due to autoscaling, but the termination happens after my code has finished, so the email is sent twice for the same data.
Here is the JSON file I send to the deploy API:
{
"apiVersion": "batch/v1",
"kind": "Job",
"metadata": {
"name": "$name",
"namespace": "$namespace"
},
"spec": {
"template": {
"metadata": {
"name": "********"
},
"spec": {
"priorityClassName": "high-priority",
"containers": [
{
"name": "******",
"image": "$dockerScancatalogueImageRepo",
"imagePullPolicy": "IfNotPresent",
"env": $env,
"resources": {
"requests": {
"memory": "2000Mi",
"cpu": "2000m"
},
"limits":{
"memory":"2650Mi",
"cpu":"2650m"
}
}
}
],
"imagePullSecrets": [
{
"name": "docker-secret"
}
],
"restartPolicy": "Never"
}
}
}
}
Here is a screenshot of the pod events:
Any idea how to fix that?
Thank you in advance.
"Perhaps you are affected by this "Note that even if you specify .spec.parallelism = 1 and .spec.completions = 1 and .spec.template.spec.restartPolicy = "Never", the same program may sometimes be started twice." from doc. What happens if you increase terminationgraceperiodseconds in your yaml file? – "
#danyL
My problem was that I had other jobs that deploy pods on my nodes with higher priority, so the scheduler was trying to preempt my running pods, but by then the job was already done and the email had already been sent. I fixed the problem by tuning the resource requests and limits in all my JSON files. I don't know if it's the perfect solution, but for now it has solved my problem; a sketch of the fix is below.
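A minimal sketch of applying that fix programmatically, assuming the manifest above is saved as job.json with the $-placeholders filled in, the kubernetes Python client is installed, and a cluster is reachable via kubeconfig. Setting requests equal to limits gives the pods Guaranteed QoS, so they are evicted only as a last resort under node pressure:
import json
from kubernetes import client, config

config.load_kube_config()

# Load the Job manifest shown above ($-placeholders already filled in).
with open("job.json") as f:
    job = json.load(f)

# Set requests equal to limits so each container gets Guaranteed QoS
# and is evicted only as a last resort.
for container in job["spec"]["template"]["spec"]["containers"]:
    container["resources"]["requests"] = dict(container["resources"]["limits"])

client.BatchV1Api().create_namespaced_job(
    namespace=job["metadata"]["namespace"],
    body=job,
)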
Thank you all for your help.
I'm cross-posting the question from here.
I’m interested in knowing whether it’s possible to fetch all the statuses for all the contexts for a given reference using the GQL API.
The query that I’m currently doing is the following:
{
repository(owner: "owner", name: "name") {
pullRequests(headRefName: "head-ref", last: 1) {
nodes {
id
commits(first: 10) {
nodes {
commit {
oid
status {
contexts {
context
createdAt
id
description
state
}
}
}
}
}
}
}
}
}
This query returns a single status for each status context, namely the most recent one for each:
{
"data": {
"repository": {
"pullRequests": {
"nodes": [
{
"id": "some-id",
"commits": {
"nodes": [
{
"commit": {
"oid": "some-oid",
"status": {
"contexts": [
{
"context": "context-1",
"createdAt": "2021-07-06T21:28:26Z",
"id": "***",
"description": "Your tests passed!",
"state": "SUCCESS"
},
{
"context": "context-2",
"createdAt": "2021-07-06T21:25:26Z",
"id": "***",
"description": "Your tests passed!",
"state": "SUCCESS"
}
]
}
}
}
]
}
}
]
}
}
}
}
On the other hand, if I use the REST API with this query:
curl -i -u se7entyse7en:$(cat token) https://api.github.com/repos/owner/name/commits/some-oid/statuses
where some-oid is the corresponding commit OID retrieved with the GQL API, the output contains ALL the statuses. In particular, I can see all the statuses of context-1 and context-2 that happened before those returned by the GQL API.
It seems to be a limitation of the GQL schema, given that StatusContext is a node instead of being a list of nodes. Basically, I would expect StatusContext to be of type [Status!]!, where Status represents a single status for the given context.
Am I missing something? Is this something expected to be changed in the future? Is the REST API the only option?
Thanks.
I opened a support ticket, and this is indeed the expected behavior; there are no plans to change it. The only solution is to use the REST API.
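For completeness, here is a minimal Python sketch of pulling every status via the REST API, following pagination; owner, name, some-oid and the token are placeholders as in the curl example, and the requests package is assumed:
import requests

token = "..."  # personal access token, as in the curl example
url = "https://api.github.com/repos/owner/name/commits/some-oid/statuses?per_page=100"

statuses = []
while url:
    resp = requests.get(url, headers={"Authorization": "token " + token})
    resp.raise_for_status()
    statuses.extend(resp.json())
    # Follow the Link header until there is no next page.
    url = resp.links.get("next", {}).get("url")

# All historical statuses for all contexts, newest first.
for s in statuses:
    print(s["context"], s["state"], s["created_at"])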
The link to the community forum is this one.
My goal is to create a Dataproc workflow template from Python code. I also want the ability to parametrize the placement.managedCluster.config.gceClusterConfig.subnetworkUri field during template instantiation.
I read the template from a JSON file like this:
{
"id": "bigquery-extractor",
"placement": {
"managed_cluster": {
"config": {
"gce_cluster_config": {
"subnetwork_uri": "some-subnet-name"
},
"software_config" : {
"image_version": "1.5"
}
},
"cluster_name": "some-name"
}
},
"jobs": [
{
"pyspark_job": {
"args": [
"job_argument"
],
"main_python_file_uri": "gs:///path-to-file"
},
"step_id": "extract"
}
],
"parameters": [
{
"name": "CLUSTER_NAME",
"fields": [
"placement.managedCluster.clusterName"
]
},
{
"name": "SUBNETWORK_URI",
"fields": [
"placement.managedCluster.config.gceClusterConfig.subnetworkUri"
]
},
{
"name": "MAIN_PY_FILE",
"fields": [
"jobs['extract'].pysparkJob.mainPythonFileUri"
]
},
{
"name": "JOB_ARGUMENT",
"fields": [
"jobs['extract'].pysparkJob.args[0]"
]
}
]
}
Here is the code snippet I use:
import json

from google.api_core.client_options import ClientOptions
from google.api_core.exceptions import AlreadyExists
from google.cloud import dataproc_v1 as dataproc

options = ClientOptions(api_endpoint="{}-dataproc.googleapis.com:443".format(region))
client = dataproc.WorkflowTemplateServiceClient(client_options=options)

# Parse the file as JSON rather than eval()-ing it.
with open(path_to_file, "r") as template_file:
    template_dict = json.load(template_file)
print(template_dict)

template = dataproc.WorkflowTemplate(template_dict)
full_region_id = "projects/{project_id}/regions/{region}".format(project_id=project_id, region=region)

try:
    client.create_workflow_template(
        parent=full_region_id,
        template=template
    )
except AlreadyExists as err:
    print(err)
When I try to run this code, I get the following error:
google.api_core.exceptions.InvalidArgument: 400 Invalid field path placement.managed_cluster.configuration.gce_cluster_config.subnetwork_uri: Field gce_cluster_config does not exist.
The behavior is the same if I try to parametrize placement.managedCluster.config.softwareConfig.imageVersion; then I get:
google.api_core.exceptions.InvalidArgument: 400 Invalid field path placement.managed_cluster.configuration.software_config.image_version: Field software_config does not exist.
But if I exclude the fields under placement.managedCluster.config from the parameters map, the template is created successfully.
I didn't find any restriction on parametrizing these fields. Is there any? Or is it just me doing something wrong?
This doc lists the parameterizable fields. It seems that, of managedCluster, only the managed cluster name is parameterizable:
Managed cluster name. Dataproc will use the user-supplied name as the name prefix, and append random characters to create a unique cluster name. The cluster is deleted at the end of the workflow.
I don't see managedCluster.config listed as parameterizable.
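If the subnetwork and image version are hard-coded in the template and only parameterizable fields (cluster name, job file, job argument) are kept in the parameters map, instantiation should then work. A rough sketch using the same client as above; the parameter values are placeholders:
template_name = "{}/workflowTemplates/bigquery-extractor".format(full_region_id)
operation = client.instantiate_workflow_template(
    name=template_name,
    parameters={
        "CLUSTER_NAME": "some-name",
        "MAIN_PY_FILE": "gs://some-bucket/path-to-file",
        "JOB_ARGUMENT": "job_argument",
    },
)
operation.result()  # block until the workflow completes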
What thresholds should be set in Service Fabric Placement / Load balancing config for Cluster with large number of guest executable applications?
I am having trouble with Service Fabric trying to place too many services onto a single node too fast.
To give an example of cluster size: there are 2-4 worker node types, there are 3-6 worker nodes per node type, each node type may run 200 guest executable applications, and each application will have at least 2 replicas. The nodes are more than capable of running the services once they are up; it is just during startup that CPU is too high.
The problem seems to be the thresholds or defaults for placement and load balancing rules set in the cluster config. As examples of what I have tried: I have turned on InBuildThrottlingEnabled and set InBuildThrottlingGlobalMaxValue to 100, I have set the Global Movement Throttle settings to be various percentages of the total application count.
At this point there are two distinct scenarios I am trying to solve for. In both cases, the nodes go to 100% for long enough that Service Fabric declares the node as down.
1st: Starting an entire cluster from all nodes being off without overwhelming nodes.
2nd: A single node being overwhelmed by too many services starting after a host comes back online.
Here are my current parameters on the cluster:
"Name": "PlacementAndLoadBalancing",
"Parameters": [
{
"Name": "UseMoveCostReports",
"Value": "true"
},
{
"Name": "PLBRefreshGap",
"Value": "1"
},
{
"Name": "MinPlacementInterval",
"Value": "30.0"
},
{
"Name": "MinLoadBalancingInterval",
"Value": "30.0"
},
{
"Name": "MinConstraintCheckInterval",
"Value": "30.0"
},
{
"Name": "GlobalMovementThrottleThresholdForPlacement",
"Value": "25"
},
{
"Name": "GlobalMovementThrottleThresholdForBalancing",
"Value": "25"
},
{
"Name": "GlobalMovementThrottleThreshold",
"Value": "25"
},
{
"Name": "GlobalMovementThrottleCountingInterval",
"Value": "450"
},
{
"Name": "InBuildThrottlingEnabled",
"Value": "false"
},
{
"Name": "InBuildThrottlingGlobalMaxValue",
"Value": "100"
}
]
},
Based on the discussion in the answer below, I wanted to leave a graph image: if a node goes down, the act of shuffling services onto the remaining nodes will cause a second node to go down, as noted here. The green node goes down, then the purple one goes down due to too many resources being shuffled onto it.
From SF's perspective, scenarios 1 and 2 are the same problem. Also, as a note, SF doesn't evict a node just because CPU consumption is high, so "the nodes go to 100% for an amount of time such that Service Fabric declares the node as down" needs some more explanation. The machines might be failing for other reasons, or I guess they could be so loaded that the kernel-level failure detectors can't ping other machines, but that isn't very common.
For config changes, I would remove all of these and go with the defaults:
{
"Name": "PLBRefreshGap",
"Value": "1"
},
{
"Name": "MinPlacementInterval",
"Value": "30.0"
},
{
"Name": "MinLoadBalancingInterval",
"Value": "30.0"
},
{
"Name": "MinConstraintCheckInterval",
"Value": "30.0"
},
For the in-build throttle to work, this needs to flip to true:
{
"Name": "InBuildThrottlingEnabled",
"Value": "false"
},
Also, since these are likely constraint violations and placement (not proactive rebalancing), we need to explicitly instruct SF to throttle those operations as well. There is config for this in SF; although it is not documented or publicly supported at this time, you can see it in the settings. By default only balancing is throttled, but you should be able to turn on throttling for all phases and set appropriate limits via something like the below.
These first two settings are also within PlacementAndLoadBalancing, like the ones above.
{
"Name": "ThrottlePlacementPhase",
"Value": "true"
},
{
"Name": "ThrottleConstraintCheckPhase",
"Value": "true"
},
These next settings to set the limits are in their own sections, and are a map of the different node type names to the limit you want to throttle for that node type.
{
"name": "MaximumInBuildReplicasPerNodeConstraintCheckThrottle",
"parameters": [
{
"name": "YourNodeTypeNameHere",
"value": "100"
},
{
"name": "YourOtherNodeTypeNameHere",
"value": "100"
}
]
},
{
"name": "MaximumInBuildReplicasPerNodePlacementThrottle",
"parameters": [
{
"name": "YourNodeTypeNameHere",
"value": "100"
},
{
"name": "YourOtherNodeTypeNameHere",
"value": "100"
}
]
},
{
"name": "MaximumInBuildReplicasPerNodeBalancingThrottle",
"parameters": [
{
"name": "YourNodeTypeNameHere",
"value": "100"
},
{
"name": "YourOtherNodeTypeNameHere",
"value": "100"
}
]
},
{
"name": "MaximumInBuildReplicasPerNode",
"parameters": [
{
"name": "YourNodeTypeNameHere",
"value": "100"
},
{
"name": "YourOtherNodeTypeNameHere",
"value": "100"
}
]
}
I would make these changes and then try again. Additional information like what is actually causing the nodes to be down (confirmed via events and SF health info) would help identify the source of the problem. It would probably also be good to verify that starting 100 instances of the apps on the node actually works and whether that's an appropriate threshold.
I"m trying to string together multiple IBM Watson requests:
Request #1: Play music.
Watson responds with the following:
{
"intents": [
{
"intent": "turn_on",
"confidence": 0.9498470783233643
}
],
"entities": [
{
"entity": "appliance",
"location": [
5,
10
],
"value": "radio",
"confidence": 1
}
],
"input": {
"text": "play music"
},
"output": {
"text": [
"What kind of music would you like to hear?"
],
"nodes_visited": [
"node_1_1510258504338",
"node_2_1510258615227"
],
"log_messages": []
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
"system": {
"dialog_stack": [
{
"dialog_node": "node_2_1510258615227"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"node_2_1510258615227": [
0
]
}
}
}
}
Request #2: The patron would type rock.
My problem is that I'm getting an error message in the response that states the following:
"log_messages": [
"No dialog node matched for the input at a root level. (and there is 1 more warning in the log)"
]
I'm pretty sure I have to pass a context into the 2nd request, but I'm not sure what I need to include. Right now I'm only passing in the conversation_id. Is there something specific from the above response that I need to pass in? For example, I'm passing this:
{
"input": {
"text": "rock"
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4"
}
}
You send back your whole context object. In this case it would be:
{
"input": {
"text": "rock"
},
"context": {
"conversation_id": "79e93cac-12bb-40fa-ab69-88f56d0845e4",
"system": {
"dialog_stack": [
{
"dialog_node": "node_2_1510258615227"
}
],
"dialog_turn_counter": 1,
"dialog_request_counter": 1,
"_node_output_map": {
"node_2_1510258615227": [
0
]
}
}
}
}
But there are SDKs that will make this easier for you.
https://github.com/watson-developer-cloud
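For example, here is a minimal Python sketch using plain requests against the Conversation message endpoint (the SDKs wrap the same call); the workspace ID, credentials, service URL and version date are placeholders:
import requests

# Placeholder endpoint, credentials and API version.
url = ("https://gateway.watsonplatform.net/conversation/api/v1/"
       "workspaces/WORKSPACE_ID/message")
auth = ("USERNAME", "PASSWORD")
params = {"version": "2017-05-26"}

# Turn 1: no context yet.
turn1 = requests.post(url, auth=auth, params=params,
                      json={"input": {"text": "play music"}}).json()

# Turn 2: echo back the full context from turn 1 so Watson knows
# where it is in the dialog tree.
turn2 = requests.post(url, auth=auth, params=params,
                      json={"input": {"text": "rock"},
                            "context": turn1["context"]}).json()
print(turn2["output"]["text"])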
The node that actions the type of music people select: is it a child of your 'turn_on' node [node_2_1510258615227]?
If so, as Simon demonstrates above, you also need to pass back the complete context packet as part of the API call. This informs Watson Conversation where in the dialog flow you were last. As the conversation system is stateless, i.e. it does not store any state information about individual conversations, it will not by default know where it is within a conversation. This is why you need to return the context element of the previous response, to allow Watson to know where you were in the conversation flow.
Your error above states that Watson looked down the list of dialog nodes you have defined at the root level and could not find a matching condition, because your matching condition was within a child node.