How to perform CRUD on a 3rd-party Custom Resource for which no Go API is available - kubernetes

I am working with the Operator SDK. In my operator controller, I want to perform CRUD operations on a Custom Resource (say ExampleCR) for which no Go API module is available.
Suppose ExampleCR has no Go API (I only have access to its CRD definition in YAML). My controller watches the Deployment object, and whenever a Deployment is created or updated I want to perform the following operations on ExampleCR from my controller code:
kubectl create on ExampleCR
kubectl update on ExampleCR
kubectl get on ExampleCR

I was able to solve this using the unstructured.Unstructured type.
Using the following sample, you can watch the CR (ExampleCR) in the controller (Ref):
// You can also watch unstructured objects
u := &unstructured.Unstructured{}
u.SetGroupVersionKind(schema.GroupVersionKind{
    Kind:    "ExampleCR",
    Group:   "",
    Version: "version", // set version here
})

// watch for primary resource
err = c.Watch(&source.Kind{Type: u}, &handler.EnqueueRequestForObject{})

// watch for secondary resource
err = c.Watch(&source.Kind{Type: u}, &handler.EnqueueRequestForOwner{
    IsController: true,
    OwnerType:    &ownerVersion.OwnerType{},
})
Once you have done this, the controller will receive reconciliation requests.
CRUD operations remain the same as for any other kind (for example Pod).
Creation of the object can be done as follows:
func newExampleCR() *unstructured.Unstructured {
    return &unstructured.Unstructured{
        Object: map[string]interface{}{
            "apiVersion": "version",   // set apiVersion here
            "kind":       "ExampleCR", // set CR kind here
            "metadata": map[string]interface{}{
                "name": "demo-deployment",
            },
            "spec": map[string]interface{}{
                // spec goes here
            },
        },
    }
}
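Get and update follow the same pattern: pass the unstructured object to the controller-runtime client that the SDK injects into your reconciler. Below is a minimal sketch, not the exact code from the linked example; the helper name syncExampleCR, the client parameter, and the spec-replacement strategy are assumptions for illustration.
import (
    "context"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "k8s.io/apimachinery/pkg/types"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// syncExampleCR gets the ExampleCR, creates it if it does not exist,
// and otherwise updates its spec (kubectl get / create / update equivalents).
func syncExampleCR(ctx context.Context, c client.Client, namespace string) error {
    desired := newExampleCR()
    desired.SetNamespace(namespace)

    existing := &unstructured.Unstructured{}
    existing.SetGroupVersionKind(desired.GroupVersionKind())

    key := types.NamespacedName{Namespace: namespace, Name: desired.GetName()}
    if err := c.Get(ctx, key, existing); err != nil {
        if apierrors.IsNotFound(err) {
            // kubectl create on ExampleCR
            return c.Create(ctx, desired)
        }
        return err
    }

    // kubectl update on ExampleCR: replace the spec on the fetched object
    existing.Object["spec"] = desired.Object["spec"]
    return c.Update(ctx, existing)
}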
A complete example for the Deployment object can be found here.
NOTE: You have to make sure that the CRD is registered with the scheme before the manager is started.

Related

Cannot create Topic with ARM to Service Bus Namespaces with Geo-Redundant Disaster Recovery

I have created "Service Bus Namespaces with Geo-Redundant Disaster Recovery", which creates 2 premium namespaces with 1 unit each, as it should: https://github.com/Azure/azure-quickstart-templates/tree/master/101-servicebus-create-namespace-geo-recoveryconfiguration
Then I try to create a Topic, but it fails. I want to create it with my own ARM template so that I can add new Topics at any time; I would like to create several topics here.
This ARM template seems to try to create a new namespace, while I would like to use the existing namespace created earlier:
https://github.com/Azure/azure-quickstart-templates/tree/master/101-servicebus-topic
New-AzResourceGroupDeployment : 11.05.49 - Resource Microsoft.ServiceBus/namespaces 'sb-namepace-a' failed with message '{
  "error": {
    "message": "SKU change invalid for ServiceBus namespace. Cannot downgrade premium namespace. CorrelationId: 1111f842-1ddf-417a-a302-829b6445e30c",
    "code": "BadRequest"
  }
}'
The error pretty clearly says that you are trying to change the SKU. Add the SKU part back and it should work:
"sku": {
  "name": "Premium",
  "tier": "Premium",
  "capacity": 4
},

Kubernetes crd failed to be created using go-client interface

I created a Kubernetes CRD following the example at https://github.com/kubernetes/sample-controller.
My controller works fine, and I can listen for the create/update/delete events of my CRD, until I tried to create an object using the go-client interface.
This is my CRD:
type MyEndpoint struct {
    metav1.TypeMeta `json:",inline"`

    // Standard object's metadata.
    // More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata
    // +optional
    metav1.ObjectMeta `json:"metadata,omitempty" protobuf:"bytes,1,opt,name=metadata"`
}
I can create the CRD definition and create objects using kubectl without any problems, but I get a failure when I use the following code to create the object:
myepDeploy := &crdv1.MyEndpoint{
    TypeMeta: metav1.TypeMeta{
        Kind:       "MyEndpoint",
        APIVersion: "mydom.k8s.io/v1",
    },
    ObjectMeta: metav1.ObjectMeta{
        Name: podName,
        Labels: map[string]string{
            "serviceName": serviceName,
            "nodeIP":      nodeName,
            "port":        "5000",
        },
    },
}
epClient := myclientset.MycontrollerV1().MyEndpoints("default")
epClient.Create(myepDeploy)
But I get the following error:
object *v1.MyEndpoint does not implement the protobuf marshalling interface and cannot be encoded to a protobuf message
I took a look at other standard types and don't see that they implement such an interface. I searched on Google but had no luck.
Any ideas? Please help. BTW, I am running on minikube.
For most common and simple types, marshalling works out of the box. In the case of a more complex structure, you may need to implement the marshalling interface manually.
You may try commenting out parts of the MyEndpoint structure to find out what exactly causes the problem.
This error occurs when your client epClient tries to marshal the MyEndpoint object to protobuf. This is because of your REST client config. Try setting the content type to "application/json".
If you are using the code below to generate the config, then change the content type:
cfg, err := clientcmd.BuildConfigFromFlags(masterURL, kubeconfig)
if err != nil {
    glog.Fatalf("Error building kubeconfig: %s", err.Error())
}

cfg.ContentType = "application/json"

kubeClient, err := kubernetes.NewForConfig(cfg)
if err != nil {
    glog.Fatalf("Error building kubernetes clientset: %s", err.Error())
}

How to deploy an opsworks application by cloudformation?

In a CloudFormation template, I create an OpsWorks stack, a layer, an instance, and an application. This template sets up and configures the instance via a Chef cookbook of recipes and scripts. How can I deploy the application automatically from the template, without manually clicking Deploy inside the stack? After the deploy, the Deploy recipes defined in the cookbook are executed:
"MyLayer": {
  "Type": "AWS::OpsWorks::Layer",
  "DependsOn": "OpsWorksServiceRole",
  "Properties": {
    "AutoAssignElasticIps": false,
    "AutoAssignPublicIps": true,
    "CustomRecipes": {
      "Setup": ["cassandra::setup", "awscli::setup", "settings::setup"],
      "Deploy": ["imports::deploy"]
    },
    "CustomSecurityGroupIds": { "Ref": "SecurityGroupIds" },
    "EnableAutoHealing": true,
    "InstallUpdatesOnBoot": false,
    "LifecycleEventConfiguration": {
      "ShutdownEventConfiguration": {
        "DelayUntilElbConnectionsDrained": false,
        "ExecutionTimeout": 120
      }
    },
    "Name": "script-node",
    "Shortname": "node",
    "StackId": { "Ref": "MyStack" },
    "Type": "custom",
    "UseEbsOptimizedInstances": true,
    "VolumeConfigurations": [{
      "Iops": 10000,
      "MountPoint": "/dev/sda1",
      "NumberOfDisks": 1,
      "Size": 20,
      "VolumeType": "gp2"
    }]
  }
}
An application looks like this:
Any ideas? Thank you.
The CreateDeployment API call generates a one-off event that executes the Deploy actions within your OpsWorks stack. I don't think any official CloudFormation resource maps to this directly, but here are some ideas on how to call it within the context of a CloudFormation template:
Write a Custom Resource that calls CreateDeployment (e.g., via the AWS SDK for Node.js) when created.
Add an AWS::CodePipeline::Pipeline resource to your template that's configured to deploy your OpsWorks app as part of a Deploy Stage. See Using AWS CodePipeline with AWS OpsWorks Stacks for documentation on this integration. (Though it's an extra service + layer of complexity, I think CodePipeline is a better layer of abstraction for modeling deployment actions in your application stack anyway.)
I believe this can be done within the recipes. So in your recipes you'll have a function to validate the app name and, if it exists, proceed with the deployment.
For example, your deploy recipe would look something like this:
if validator(node[:app][:name]) == true
  # do whatever
end
and this validator function can reside in your Chef library:
def validator(app_name)
  app = search("aws_opsworks_app", "name:#{app_name}").first
  if app[:deploy] == true
    Chef::Log.warn("PROCEEDING: Deploy initiated for #{app[:name]}")
  end
end

Kubernetes rest api to check if namespace is created and active

I call the below REST API with a POST body to create a namespace in Kubernetes:
http://kuberneteshot/api/v1/namespaces/
{
  "apiVersion": "v1",
  "kind": "Namespace",
  "metadata": {
    "name": "testnamespace"
  }
}
In response I get HTTP status 201 Created and the JSON response below:
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "testnamespace",
    "selfLink": "/api/v1/namespaces/testnamespace",
    "uid": "701ff75e-5781-11e6-a48a-74dbd1a0fb73",
    "resourceVersion": "57825525",
    "creationTimestamp": "2016-08-01T00:46:52Z"
  },
  "spec": {
    "finalizers": [
      "kubernetes"
    ]
  },
  "status": {
    "phase": "Active"
  }
}
Does the status in the response with phase Active mean the namespace was successfully created and is active?
Is there any other REST API to check whether the namespace exists and is active?
The reason I would like to know whether the namespace has been created is that I get an error message if I fire the create-pod request before the namespace is actually created:
Error from server: error when creating "./pod.json": pods "my-pod" is forbidden: service account username/default was not found, retry after the service account is created
The below works fine if I add a sleep of 5 seconds between the create-namespace and create-pod commands:
kubectl delete namespace testnamepsace; kubectl create namespace testnamepsace; sleep 5; kubectl create -f ./pod.json --namespace=testnamepsace
If I don't add the 5-second sleep, I see the error message mentioned above.
Apparently your Pod has a hard dependency on the default ServiceAccount, so you probably want to check it's been created instead of looking only at the namespace state. The existence of the namespace doesn't guarantee the immediate availability of your default ServiceAccount.
Some API endpoints you might want to query:
GET /api/v1/namespaces/foo/serviceaccounts/default
returns 200 with the object description if the ServiceAccount default exists in the namespace foo, 404 otherwise
GET /api/v1/serviceaccounts?fieldSelector=metadata.namespace=foo,metadata.name=default
returns 200 and a list of all ServiceAccount items in the namespace foo with name default (empty list if no object matches)
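If you are driving this from code rather than a fixed sleep, you can poll the first endpoint until it returns 200 before creating the pod. A minimal Go sketch; the helper name, the 500 ms polling interval, and the omission of auth/TLS handling are assumptions for illustration:
import (
    "fmt"
    "net/http"
    "time"
)

// waitForDefaultServiceAccount polls GET {apiServer}/api/v1/namespaces/{namespace}/serviceaccounts/default
// until it returns 200 or the timeout expires, replacing the fixed 5-second sleep.
func waitForDefaultServiceAccount(apiServer, namespace string, timeout time.Duration) error {
    url := fmt.Sprintf("%s/api/v1/namespaces/%s/serviceaccounts/default", apiServer, namespace)
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := http.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil // default ServiceAccount exists; safe to create the pod
            }
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("timed out waiting for the default service account in namespace %q", namespace)
}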
Yes, the namespace is persisted prior to being returned from the create API call. The returned object shows the status of the namespace as Active.
You can also do GET http://kubernetehost/api/v1/namespaces/testnamespace to retrieve the same information.

k8s - Kubernetes - Service Update - Error

I'm trying to update a service using:
kubectl update service my-service \
--patch='{ "apiVersion":"v1", "spec": { "selector": { "build":"2"} } }'
I receive the following error:
Error from server: service "\"apiVersion\":\"v1\"," not found
I have tried the following:
moving the service name to the end
removing the apiVersion
Maybe kubectl update is not available for services?
For now I have been making my updates by simply stopping and restarting my service, but sometimes the corresponding forwarding port changes, so this does not seem to be a good approach.
PS:
v0.19
api_v1
I am not sure if patch is 100% working yet, but if you are going to do this, you at least need to put apiVersion inside metadata, like so:
--patch='{ "metadata": { "apiVersion":"v1" }, "spec": { "selector": { "build":"2" } } }'