When I run the code below using the client-go library, I get an inscrutable error. What am I doing wrong?
ctx := context.TODO()
ns := applycorev1.NamespaceApplyConfiguration{
	ObjectMetaApplyConfiguration: &applymetav1.ObjectMetaApplyConfiguration{
		Name: to.StringPtr("foobar"),
	},
}
if _, err := kubeClient.CoreV1().Namespaces().Apply(ctx, &ns, v1.ApplyOptions{}); err != nil {
	panic(err)
}
Yields the very unhelpful error:
panic: PatchOptions.meta.k8s.io "" is invalid: fieldManager: Required value: is required for apply patch
What is the correct way to send an Apply operation to the API server in Kube using client-go?
At a minimum, you should set FieldManager in your ApplyOptions.
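For example, a minimal sketch of the corrected call, reusing the ns configuration from the question ("my-controller" is just a placeholder; pick any non-empty name that identifies your client):

// Server-side apply requires a field manager identifying the caller.
if _, err := kubeClient.CoreV1().Namespaces().Apply(ctx, &ns, v1.ApplyOptions{
	FieldManager: "my-controller", // placeholder identifier
}); err != nil {
	panic(err)
}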
I am also trying this out; for now I am referring to https://ymmt2005.hatenablog.com/entry/2020/04/14/An_example_of_using_dynamic_client_of_k8s.io/client-go
I have been using kubebuilder for writing a custom controller, and I am aware of the Get(), Update(), and Delete() methods that it provides. But now I am looking for a method that mimics the behaviour of kubectl rollout restart deployment. If there is no such direct method, then I am looking for the correct way to mimic it.
type CustomReconciler struct {
	client.Client
	Log    logr.Logger
	Scheme *runtime.Scheme
}

func (r *CustomReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	configMap := &v1.ConfigMap{}
	if err := r.Get(ctx, req.NamespacedName, configMap); err != nil {
		r.Log.Error(err, "Failed to GET configMap")
		return ctrl.Result{}, err
	}
Say in the above code I read a deployment name from the ConfigMap and want to rollout-restart it, as follows:
	val := configMap.Data["config.yml"]
	config := Config{}
	if err := yaml.Unmarshal([]byte(val), &config); err != nil {
		r.Log.Error(err, "failed to unmarshal config data")
		return ctrl.Result{}, err
	}

	// Need equivalent of following
	// r.RolloutRestart(config.DeploymentName)
In all cases where you wish to replicate kubectl behavior, the answer is always to increase its verbosity (e.g. kubectl rollout restart deployment/your-deploy -v=9) and it'll show you exactly -- sometimes down to the wire payloads -- what it is doing.
For rollout restart, one will find that it just bumps an annotation (kubectl.kubernetes.io/restartedAt) in the pod template of the Deployment/StatefulSet/whatever, which makes the outer object "different" and triggers a reconciliation run.
You can squat on their annotation, you can make up your own, or you can use a label change -- practically any "meaningless" change will do, as sketched below.
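For illustration, a minimal sketch of such a helper under the question's setup. The method name rolloutRestart is made up (there is no such controller-runtime method), and it assumes the usual imports: appsv1 "k8s.io/api/apps/v1", "k8s.io/apimachinery/pkg/types", "sigs.k8s.io/controller-runtime/pkg/client", and "time":

// rolloutRestart mimics `kubectl rollout restart` by bumping an annotation
// in the pod template, which changes the template and triggers a new rollout.
// (Hypothetical helper, not controller-runtime API.)
func (r *CustomReconciler) rolloutRestart(ctx context.Context, namespace, name string) error {
	var deploy appsv1.Deployment
	if err := r.Get(ctx, types.NamespacedName{Namespace: namespace, Name: name}, &deploy); err != nil {
		return err
	}
	patched := deploy.DeepCopy()
	if patched.Spec.Template.Annotations == nil {
		patched.Spec.Template.Annotations = map[string]string{}
	}
	// Squat on kubectl's annotation, or make up your own; any change works.
	patched.Spec.Template.Annotations["kubectl.kubernetes.io/restartedAt"] = time.Now().Format(time.RFC3339)
	return r.Patch(ctx, patched, client.MergeFrom(&deploy))
}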
First time user of Cadence:
Scenario
I have a cadence server running in my sandbox environment.
Intent is to fetch the workflow status
I am trying to use this cadence client
go.uber.org/cadence/client
on my local host to talk to my sandbox cadence server.
This is my simple code snippet:
var cadClient client.Client

func main() {
	wfID := "01ERMTDZHBYCH4GECHB3J692PC" // I got this from cadence-ui
	ctx := context.Background()
	wf := cadClient.GetWorkflow(ctx, wfID, "") // <-- panic hits here
	log.Println("Workflow RunID: ", wf.GetRunID())
}
I am sure I am getting it wrong, because the client does not know how to reach the cadence server.
I referred to https://cadenceworkflow.io/docs/go-client/ to find the correct usage, but could not find any reference (it is possible that I missed it).
Any help on how to resolve/implement this would be much appreciated.
I am not sure what panic you got. Based on the code snippet, it's likely that you haven't initialized the client.
To initialize it, follow the sample code here: https://github.com/uber-common/cadence-samples/blob/master/cmd/samples/common/sample_helper.go#L82
and
https://github.com/uber-common/cadence-samples/blob/aac75c7ca03ec0c184d0f668c8cd0ea13d3a7aa4/cmd/samples/common/factory.go#L113
ch, err := tchannel.NewChannelTransport(
	tchannel.ServiceName(_cadenceClientName))
if err != nil {
	b.Logger.Fatal("Failed to create transport channel", zap.Error(err))
}

b.Logger.Debug("Creating RPC dispatcher outbound",
	zap.String("ServiceName", _cadenceFrontendService),
	zap.String("HostPort", b.hostPort))

b.dispatcher = yarpc.NewDispatcher(yarpc.Config{
	Name: _cadenceClientName,
	Outbounds: yarpc.Outbounds{
		_cadenceFrontendService: {Unary: ch.NewSingleOutbound(b.hostPort)},
	},
})

if b.dispatcher != nil {
	if err := b.dispatcher.Start(); err != nil {
		b.Logger.Fatal("Failed to create outbound transport channel: %v", zap.Error(err))
	}
}

client := workflowserviceclient.New(b.dispatcher.ClientConfig(_cadenceFrontendService))
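Once the dispatcher is started, that workflowserviceclient can be wrapped into the high-level client from go.uber.org/cadence/client, which is what GetWorkflow is called on. A minimal sketch, assuming the service client built above (named client in the sample; renamed service here to avoid clashing with the client package) and a placeholder domain name:

// Wrap the low-level service client in the high-level cadence client.
// "my-domain" is a placeholder; use the domain your workflow was started in.
cadClient = client.NewClient(service, "my-domain", &client.Options{})
wf := cadClient.GetWorkflow(context.Background(), wfID, "")
log.Println("Workflow RunID:", wf.GetRunID())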
I'm trying to modify the Spec of non-owned objects as part of the Reconcile of my Custom Resource, but it seems like it ignores any fields that are not primitives. I am using controller-runtime.
I figured since it was only working on primitives, maybe it's an issue related to DeepCopy. However, removing it did not solve the issue, and I read that any Updates on objects have to be on deep copies to avoid messing up the cache.
I also tried setting client.FieldOwner(...), since it is said to be required for Updates that are done server-side. I wasn't sure what to set it to, so I made it req.NamespacedName.String(). That did not work either.
Here is the Reconcile loop for my controller:
func (r *MyCustomObjectReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
	// ...
	var myCustomObject customv1.MyCustomObject
	if err := r.Get(ctx, req.NamespacedName, &myCustomObject); err != nil {
		log.Error(err, "unable to fetch MyCustomObject")
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	// ...
	deployList := &kappsv1.DeploymentList{}
	labels := map[string]string{
		"mylabel": myCustomObject.Name,
	}
	if err := r.List(ctx, deployList, client.MatchingLabels(labels)); err != nil {
		log.Error(err, "unable to fetch Deployments")
		return ctrl.Result{}, err
	}

	// make a deep copy to avoid messing up the cache (used by other controllers)
	myCustomObjectSpec := myCustomObject.Spec.DeepCopy()

	// the two fields of my CRD that affect the Deployments
	port := myCustomObjectSpec.Port           // type: *int32
	customenv := myCustomObjectSpec.CustomEnv // type: map[string]string

	for _, dep := range deployList.Items {
		newDeploy := dep.DeepCopy() // already returns a pointer

		// Do these things:
		// 1. replace first container's containerPort with myCustomObjectSpec.Port
		// 2. replace first container's Env with values from myCustomObjectSpec.CustomEnv
		// 3. Update the Deployment
		container := newDeploy.Spec.Template.Spec.Containers[0]

		// 1. Replace container's port
		container.Ports[0].ContainerPort = *port

		envVars := make([]kcorev1.EnvVar, 0, len(customenv))
		for key, val := range customenv {
			envVars = append(envVars, kcorev1.EnvVar{
				Name:  key,
				Value: val,
			})
		}
		// 2. Replace container's Env variables
		container.Env = envVars

		// 3. Perform update for deployment (port works, env gets ignored)
		if err := r.Update(ctx, newDeploy); err != nil {
			log.Error(err, "unable to update deployment", "deployment", dep.Name)
			return ctrl.Result{}, err
		}
	}
	return ctrl.Result{}, nil
}
The Spec for my CRD looks like:
// MyCustomObjectSpec defines the desired state of MyCustomObject
type MyCustomObjectSpec struct {
	// CustomEnv is a list of environment variables to set in the containers.
	// +optional
	CustomEnv map[string]string `json:"customEnv,omitempty"`

	// Port is the port that the backend container is listening on.
	// +optional
	Port *int32 `json:"port,omitempty"`
}
I expected that when I kubectl apply a new CR with changes to the Port and CustomEnv fields, it would modify the deployments as described in Reconcile. However, only the Port is updated, and the changes to the container's Env are ignored.
The problem was that I needed a pointer to the Container I was modifying.
Doing this instead worked:
container := &newDeploy.Spec.Template.Spec.Containers[0]
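For what it's worth, this is plain Go slice semantics rather than anything controller-runtime specific, and it also explains why Port appeared to work while Env did not; a short illustration using the variables from the question:

// Indexing a slice of structs yields a copy of the element:
container := newDeploy.Spec.Template.Spec.Containers[0]

// The copy's Ports field is a slice header that still points at the same
// backing array, so this write is visible on the original Deployment:
container.Ports[0].ContainerPort = *port

// This only reassigns a field on the local copy; the Deployment's
// container keeps its old Env, so the change looks "ignored":
container.Env = envVars

// Taking the element's address mutates it in place:
containerPtr := &newDeploy.Spec.Template.Spec.Containers[0]
containerPtr.Env = envVars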
I'm using client-go for Kubernetes and trying to get the API URL of the current cluster, i.e. something similar to the output of kubectl cluster-info.
I found a function called getCluster:
func (config *DirectClientConfig) ClientConfig() (*restclient.Config, error) {
	// check that getAuthInfo, getContext, and getCluster do not return an error.
	// Do this before checking if the current config is usable in the event that an
	// AuthInfo, Context, or Cluster config with user-defined names are not found.
	// This provides a user with the immediate cause for error if one is found
	configAuthInfo, err := config.getAuthInfo()
	if err != nil {
		return nil, err
	}

	_, err = config.getContext()
	if err != nil {
		return nil, err
	}

	configClusterInfo, err := config.getCluster()
	if err != nil {
		return nil, err
	}
	...
}
When I write the following in my code
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
clusterInfo, err := config.getCluster()
I get the error config.getCluster undefined (type *rest.Config has no field or method getCluster)
How can I use this function? Is there any other way to get this URL?
getCluster is an unexported (lowercase) method of DirectClientConfig in the clientcmd package, not of *rest.Config, which is why you get that error; you cannot call it from your own code. Note that the *rest.Config returned by BuildConfigFromFlags already exposes the API server URL in its Host field; alternatively, read it from the kubeconfig as described below.
$ kubectl cluster-info
Kubernetes master is running at https://10.156.0.3:6443
KubeDNS is running at https://10.156.0.3:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
If you run kubectl cluster-info -v8 you can see the following:
The first line is basically taken from your ~/.kube/config file; kubectl just checks that it works, with a simple GET request searching for something that should definitely be present in the cluster:
I0222 11:21:18.015482 18150 round_trippers.go:416] GET https://10.156.0.3:6443/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue
You can get the result of this command by starting kubectl proxy and then, in other console:
curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/services?labelSelector=kubernetes.io%2Fcluster-service%3Dtrue
As you can see in the response, there are no lines with master URL there.
So, to get the value shown in the first line of the kubectl cluster-info output, you just need to read the correct part of the Kubernetes config file, because it can contain several cluster configurations.
To read and deserialize Kubernetes config file there is a function in the loader.go:
// LoadFromFile takes a filename and deserializes the contents into Config object
func LoadFromFile(filename string) (*clientcmdapi.Config, error)
or another function in the config.go:
// getConfigFromFile tries to read a kubeconfig file and if it can't, returns an error. One exception, missing files result in empty configs, not an error.
func getConfigFromFile(filename string) (*clientcmdapi.Config, error)
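A minimal sketch of using LoadFromFile to print the current cluster's URL (the kubeconfig path is a placeholder, and nil checks are omitted for brevity):

cfg, err := clientcmd.LoadFromFile("/path/to/.kube/config") // placeholder path
if err != nil {
	log.Fatal(err)
}
// Look up the cluster referenced by the current context, then read its URL.
cluster := cfg.Clusters[cfg.Contexts[cfg.CurrentContext].Cluster]
fmt.Println("Kubernetes master is running at", cluster.Server)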
Having trouble figuring out what is wrong. I have a remote kubernetes cluster up and have copied the config locally. I know it is correct because I have gotten other commands to work for me.
The one I can't get to work is a deployment patch. My code:
const namespace = "default"

var clientset *kubernetes.Clientset

func init() {
	kubeconfig := "/Users/$USER/go/k8s-api/config"
	config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	// create the clientset
	clientset, err = kubernetes.NewForConfig(config)
	if err != nil {
		panic(err.Error())
	}
}

func main() {
	deploymentsClient := clientset.ExtensionsV1beta1().Deployments("default")
	patch := []byte(`[{"spec":{"template":{"spec":{"containers":[{"name":"my-deploy-test","image":"$ORG/$REPO:my-deploy0.0.1"}]}}}}]`)
	res, err := deploymentsClient.Patch("my-deploy", types.JSONPatchType, patch)
	if err != nil {
		panic(err)
	}
	fmt.Println(res)
}
All I get back is:
panic: the server rejected our request due to an error in our request
Any help appreciated, thanks!
You have mixed up JSONPatchType with MergePatchType; JSONPatchType wants the input to be RFC 6902 formatted "commands", and in that case it must be a JSON array, because multiple commands can be applied in order to the input document.
However, your payload looks much closer to MergePatchType (RFC 7386), in which case the input should not be a JSON array, because the source document is not an array of "spec" objects.
Thus, I'd bet that just dropping the leading [ and trailing ] and changing the argument to types.MergePatchType will get you much further along, as sketched below.
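A sketch of that fix, keeping the question's placeholders as-is:

patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"my-deploy-test","image":"$ORG/$REPO:my-deploy0.0.1"}]}}}}`)
res, err := deploymentsClient.Patch("my-deploy", types.MergePatchType, patch)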
Actually, you should use types.StrategicMergePatchType and remove the leading [ and trailing ] brackets from the patch string.
Merge-patch: With a JSON merge patch, if you want to update a list, you have to specify the entire new list. And the new list completely replaces the existing list.
Strategic-merge-patch: With a strategic merge patch, a list is either replaced or merged depending on its patch strategy. The patch strategy is specified by the value of the patchStrategy key in a field tag in the Kubernetes source code. For example, the Containers field of PodSpec struct has a patchStrategy of merge:
type PodSpec struct {
	...
	Containers []Container `json:"containers" patchStrategy:"merge" patchMergeKey:"name" ...`
}
N.B.: kubectl by default uses strategic merge patch to patch Kubernetes resources.
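Using the same bracket-stripped payload as in the sketch above, only the patch type changes; because Containers carries patchMergeKey:"name", the entry is merged into the existing list by container name instead of replacing the whole list:

// Strategic merge: the containers list is merged by the "name" key.
res, err := deploymentsClient.Patch("my-deploy", types.StrategicMergePatchType, patch)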