How to get all Kubernetes Deployment objects using the Kubernetes Java client?

I am planning to write a simple program using the Kubernetes Java client (https://github.com/kubernetes-client/java/). I could get all namespaces and pods, but how do I get the list of deployments in a given namespace? I couldn't find any method. Is there any way to get it?
for (V1Namespace ns : namespaces.getItems()) {
    System.out.println("------Begin-----");
    System.out.println("Namespace: " + ns.getMetadata().getName());
    V1PodList pods = api.listNamespacedPod(ns.getMetadata().getName(), null, null, null, null, null, null, null, null, null);
    int count = 0;
    for (V1Pod pod : pods.getItems()) {
        System.out.println("Pod " + (++count) + ": " + pod.getMetadata().getName());
        System.out.println("Node: " + pod.getSpec().getNodeName());
    }
    System.out.println("------End-----");
}

I guess you're looking for the following example:
public class Example {
    public static void main(String[] args) {
        ApiClient defaultClient = Configuration.getDefaultApiClient();
        defaultClient.setBasePath("http://localhost");

        // Configure API key authorization: BearerToken
        ApiKeyAuth bearerToken = (ApiKeyAuth) defaultClient.getAuthentication("BearerToken");
        bearerToken.setApiKey("YOUR API KEY");
        // Uncomment the following line to set a prefix for the API key, e.g. "Token" (defaults to null)
        //bearerToken.setApiKeyPrefix("Token");

        AppsV1Api apiInstance = new AppsV1Api(defaultClient);
        String namespace = "namespace_example"; // object name and auth scope, such as for teams and projects

        // The remaining parameters are optional; pass null to use the server defaults.
        String pretty = null;                // if "true", the output is pretty printed
        Boolean allowWatchBookmarks = null;  // request BOOKMARK watch events (ignored unless watching)
        String _continue = null;             // continue token from a previous paged list result
        String fieldSelector = null;         // restrict the returned objects by their fields
        String labelSelector = null;         // restrict the returned objects by their labels
        Integer limit = null;                // max items per list call; server sets `continue` if more exist
        String resourceVersion = null;       // list/watch relative to this resource version
        Integer timeoutSeconds = null;       // timeout for the list/watch call
        Boolean watch = null;                // watch for changes instead of doing a one-shot list

        try {
            V1DeploymentList result = apiInstance.listNamespacedDeployment(namespace, pretty, allowWatchBookmarks, _continue, fieldSelector, labelSelector, limit, resourceVersion, timeoutSeconds, watch);
            System.out.println(result);
        } catch (ApiException e) {
            System.err.println("Exception when calling AppsV1Api#listNamespacedDeployment");
            System.err.println("Status code: " + e.getCode());
            System.err.println("Reason: " + e.getResponseBody());
            System.err.println("Response headers: " + e.getResponseHeaders());
            e.printStackTrace();
        }
    }
}

Related

Interpreting server response for events correctly

I would like to store values of event properties received from the server in a database. My problems are that in the event consumer:
I can't figure out which event type my client received.
I don't know how to map variant indexes to properties without knowing the EventType.
Events come with the property "EventType", which would solve my first problem. But since I am receiving many different event types, I do not know at which variant index it is located. Should I always place "EventType" at index 0 in the select clause whenever creating a new EventFilter?
For the second problem, item.getMonitoringFilter().decode(client.getSerializationContext()) offers a view on the property structure, but I am not sure how to use it for mapping variants to properties. Does anybody know how to solve these problems?
Here is the event consumer code that I use. It is taken from milo client examples.
for (UaMonitoredItem monitoredItem : mItems) {
    monitoredItem.setEventConsumer((item, vs) -> {
        LOGGER.info("Event received from: {}", item.getReadValueId().getNodeId());
        LOGGER.info("getMonitoredItemId: {}", item.getMonitoredItemId());
        LOGGER.info("getMonitoringFilter: {}", item.getMonitoringFilter().decode(client.getSerializationContext()));
        for (int i = 0; i < vs.length; i++) {
            LOGGER.info("variant[{}]: datatype={}, value={}", i, vs[i].getDataType(), vs[i].getValue());
        }
    });
}
Thank you in advance.
Update:
It seems I have figured it out, by typecasting to EventFilter. Further information, such as the qualified name of event properties or event type node IDs, can then be derived:
ExtensionObject eObject = item.getMonitoringFilter();
EventFilter eFilter = (EventFilter) eObject.decode(client.getSerializationContext());
QualifiedName qName = eFilter.getSelectClauses()[0].getBrowsePath()[0];
LiteralOperand literalOperand = (LiteralOperand) eFilter.getWhereClause().getElements()[0]
        .getFilterOperands()[1].decode(client.getSerializationContext());
NodeId eventTypeNodeId = (NodeId) literalOperand.getValue().getValue();
Didn't you supply the filter in the first place when you created the MonitoredItem? Why do you need to "reverse engineer" the filter result to get back to what you did in the first place?
The properties you receive in the event data and the order they come in are defined by the select clause you used when creating the MonitoredItem. If you choose to select the EventId field then it will always be at the same corresponding index.
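A minimal sketch of that idea in plain Java: record the browse names in select-clause order when you create the MonitoredItem, then map each received variant index back to its field name. `EventFieldMapper` and the field names are illustrative helpers of my own, not part of the Milo API:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Because event fields arrive in the same order as the select clause used to
// create the MonitoredItem, keeping that order lets us map variant indexes
// back to field names without decoding the filter afterwards.
class EventFieldMapper {
    private final List<String> selectedFields; // browse names, in select-clause order

    EventFieldMapper(List<String> selectedFields) {
        this.selectedFields = new ArrayList<>(selectedFields);
    }

    // Map the received variants (here plain Objects) to their field names.
    Map<String, Object> map(Object[] variants) {
        Map<String, Object> byName = new LinkedHashMap<>();
        for (int i = 0; i < variants.length && i < selectedFields.size(); i++) {
            byName.put(selectedFields.get(i), variants[i]);
        }
        return byName;
    }
}
```

In the event consumer you would then look up fields by name, e.g. `mapper.map(vs).get("EventType")`, instead of hard-coding indexes.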

UaSerializationException: request exceeds remote max message size: 2434140 > 2097152

I am a rookie. I tried to use the following code for a bulk subscription, but something went wrong. How can I solve this problem?
OpcUaSubscriptionManager subscriptionManager = opcUaClient.getSubscriptionManager();
UaSubscription subscription = subscriptionManager.createSubscription(publishInterval).get();
List<MonitoredItemCreateRequest> itemsToCreate = new ArrayList<>();
for (Tag tag : tagList) {
    NodeId nodeId = new NodeId(nameSpace, tag.getPath());
    ReadValueId readValueId = new ReadValueId(nodeId, AttributeId.Value.uid(), null, null);
    MonitoringParameters parameters = new MonitoringParameters(
            subscription.nextClientHandle(),
            publishInterval,
            null,                         // filter, null means use default
            UInteger.valueOf(queueSize),  // queue size
            true                          // discard oldest
    );
    MonitoredItemCreateRequest request = new MonitoredItemCreateRequest(
            readValueId, MonitoringMode.Reporting, parameters);
    itemsToCreate.add(request);
}
BiConsumer<UaMonitoredItem, Integer> consumer = (item, id) ->
        item.setValueConsumer(this::onSubscriptionValue);
List<UaMonitoredItem> items = subscription.createMonitoredItems(
        TimestampsToReturn.Both,
        itemsToCreate,
        consumer
).get();
for (UaMonitoredItem item : items) {
    if (!item.getStatusCode().isGood()) {
        log.error("failed to create item for nodeId={} (status={})",
                item.getReadValueId().getNodeId(), item.getStatusCode());
    }
}
How many items are you trying to create?
It seems that the resulting message exceeds the limits set by the server you are connecting to. You may need to break your list up and create the items in smaller chunks.
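A minimal sketch of the chunking idea, assuming a generic partition helper (the chunk size of 1000 is an arbitrary assumption; tune it to the limits your server negotiates):

```java
import java.util.ArrayList;
import java.util.List;

class ChunkedCreate {
    // Split a large list into fixed-size chunks so that each
    // CreateMonitoredItems request stays under the server's max message size.
    static <T> List<List<T>> partition(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(new ArrayList<>(items.subList(i, Math.min(i + chunkSize, items.size()))));
        }
        return chunks;
    }
}
```

Each chunk would then go into its own `subscription.createMonitoredItems(...)` call instead of one huge request.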
I do not know the library that you use, but one of the first steps an OPC UA client takes when connecting to a server is to negotiate the maximum buffer size, the maximum total message size, and the maximum number of chunks a message can be sent in; the OPC UA documentation calls this process the "Handshake".
If your request is too long, it should be split and sent in several chunks according to the limits previously negotiated with the server.
The server will probably also reply in several chunks; all of this has to be considered when programming an OPC UA client.

Do I use provided time or user supplied tag when handling a reflect on a receive order attribute from a time managed federate?

In a simulation using the RPR FOM, suppose I get a reflectAttributeValues callback with a LogicalTime time stamp (simulation time) and the OrderType receive order in my FederateAmbassador. For dead reckoning algorithms, do I use the time stamp supplied by the RTI or the time stamp encoded in the userSuppliedTag? Using the userSuppliedTag would mean the decoded value if absolute, and the system clock if relative.
To clarify, I get attributes (specified as receive order) reflected from a time-managed federate in this FederateAmbassador callback from the RTI:
void reflectAttributeValues(ObjectInstanceHandle theObject,
        AttributeHandleValueMap theAttributes,
        byte[] userSuppliedTag,
        OrderType sentOrdering,
        TransportationTypeHandle theTransport,
        LogicalTime theTime,
        OrderType receivedOrdering,
        MessageRetractionHandle retractionHandle,
        SupplementalReflectInfo reflectInfo)
For attributes that were updated in Time Stamp Order, I used the time parameter to know when the attribute had last been updated, and simulation time to dead reckon.
public void reflectAttributeValues(
        ObjectInstanceHandle objectHandle,
        AttributeHandleValueMap attributes,
        byte[] userSuppliedTag,
        OrderType sentOrdering,
        TransportationTypeHandle theTransport,
        LogicalTime time,
        OrderType receivedOrdering,
        MessageRetractionHandle retractionHandle,
        SupplementalReflectInfo reflectInfo) {
    attributes.forEach((attributeHandle, value) -> {
        lastUpdated.put(attributeHandle, time);
        timeManaged.add(attributeHandle);
        // decode value into your object
        ...
    });
}
For attributes that were updated in Receive Order without a time stamp, I used the userSuppliedTag to know when the attribute had last been updated (the value in the tag for absolute, the system clock at the time of receiving the attribute for relative), and then used the system clock to dead reckon.
public void reflectAttributeValues(
        ObjectInstanceHandle objectHandle,
        AttributeHandleValueMap attributes,
        byte[] userSuppliedTag,
        OrderType sentOrdering,
        TransportationTypeHandle theTransport,
        SupplementalReflectInfo reflectInfo) {
    LogicalTime time;
    if (isRelativeTag(userSuppliedTag)) {
        time = factory.createSystemLogicalTime(System.currentTimeMillis());
    } else {
        time = decodeTag(userSuppliedTag);
    }
    attributes.forEach((attributeHandle, value) -> {
        lastUpdated.put(attributeHandle, time);
        timeManaged.remove(attributeHandle); // attributes might switch
        // decode value into your objects
        ...
    });
}
Then to dead reckon:
private Vector3D getDeadReckonedWorldLocation(LogicalTime time) {
    LogicalTime lastUpdatedSpatial = lastUpdated.get(spatialAttributeHandle);
    if (!timeManaged.contains(spatialAttributeHandle)) {
        time = factory.createSystemLogicalTime(System.currentTimeMillis());
    }
    LogicalTimeInterval timeToDeadReckon = time.distance(lastUpdatedSpatial);
    return deadReckon(timeToDeadReckon);
}
The code here consists of simplified examples that may not compile, but they capture the solution I managed to come up with.
Most users of the RPR FOM only use the time in the User Supplied Tag.
The HLA Time Management Services are usually not used, and in that case you would never receive a LogicalTime or messages in Time Stamp Order (TSO).
See the Federation Agreement for the RPR FOM, "SISO-STD-001-2015: Standard for Guidance, Rationale, and Interoperability Modalities (GRIM) for the Real-time Platform Reference Federation Object Model (RPR FOM)", for more details: https://www.sisostds.org/DigitalLibrary.aspx?Command=Core_Download&EntryId=30822

How to use Couchbase as a FIFO queue

With the Java client, how can I use Couchbase to implement a thread-safe FIFO queue? There can be many threads popping from the queue and pushing into it. Each object in the queue is a String[].
Couchbase doesn't have any built-in functionality for creating queues, but you can do that yourself.
I'll explain how in the short example below.
Say we have a queue named queue, and it will have items named item:<index>. To implement the queue you'll need to store your values under keys like <queue_name>:item:<index>, where index comes from a separate key queue:index that you increment while pushing to the queue and decrement while popping.
In Couchbase you can use the increment and decrement operations to implement this, because those operations are atomic and thread-safe.
So the code for your push and pop functions will look like:
void push(String queue, String[] value) {
    int index = couchbase.increment(queue + ":index");
    couchbase.set(queue + ":item:" + index, value);
}

String[] pop(String queue) {
    int index = couchbase.get(queue + ":index");
    String[] result = couchbase.get(queue + ":item:" + index);
    couchbase.decrement(queue + ":index");
    return result;
}
Sorry for the code; I used Java and the Couchbase Java client a long time ago. If the Java client now has callbacks, like the Node.js client, you can rewrite this code to use them, which would be better, I think.
You can also add an additional check to the set operation: use the add operation (in the C# client it is called StoreMode.Add), which throws an exception if an item with the given key already exists. You can catch that exception and call the push function again with the same arguments.
UPD: I'm sorry, it was too early in the morning, so I couldn't think clearly.
For FIFO, as @avsej said, you'll need two counters: queue:head and queue:tail. So for FIFO:
void push(String queue, String[] value) {
    int index = couchbase.increment(queue + ":tail");
    couchbase.set(queue + ":item:" + index, value);
}

String[] pop(String queue) {
    int index = couchbase.increment(queue + ":head") - 1;
    String[] result = couchbase.get(queue + ":item:" + index);
    return result;
}
Note: the code can look slightly different depending on the start values of queue:tail and queue:head (zero, one, or something else).
You can also set a maximum value for the counters; after reaching it, queue:tail and queue:head would be reset to 0 (just to limit the number of documents). And you can set an expiry value on each document if you actually need that.
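As a sanity check of the two-counter scheme, here is a hedged in-memory sketch in plain Java, with ConcurrentHashMap standing in for Couchbase's set/get and AtomicLong for its atomic increment. Both counters start at 0 here, so the first push writes item 1 and the first pop reads item 1:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

class TwoCounterFifo {
    private final AtomicLong tail = new AtomicLong(); // last index written
    private final AtomicLong head = new AtomicLong(); // last index read
    private final ConcurrentMap<Long, String[]> items = new ConcurrentHashMap<>();

    // Mirrors: index = increment(queue + ":tail"); set(queue + ":item:" + index, value)
    void push(String[] value) {
        long index = tail.incrementAndGet();
        items.put(index, value);
    }

    // Mirrors the head-counter pop; returns null when the slot is empty.
    String[] pop() {
        long index = head.incrementAndGet();
        return items.remove(index);
    }
}
```

Caveat: popping an empty queue still advances head past tail, so a real implementation would need to guard against that (or reconcile the counters), exactly the kind of start-value/edge-case detail the note above alludes to.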
Couchbase already has a CouchbaseQueue data structure.
Example usage, taken from the SDK documentation linked below:
Queue<String> shoppingList = new CouchbaseQueue<String>("queueDocId", collection, String.class, QueueOptions.queueOptions());
shoppingList.add("loaf of bread");
shoppingList.add("container of milk");
shoppingList.add("stick of butter");

// What does the JSON document look like?
System.out.println(collection.get("queueDocId").contentAsArray());
//=> ["stick of butter","container of milk","loaf of bread"]

String item;
while ((item = shoppingList.poll()) != null) {
    System.out.println(item);
    // => loaf of bread
    // => container of milk
    // => stick of butter
}

// What does the JSON document look like after draining the queue?
System.out.println(collection.get("queueDocId").contentAsArray());
//=> []
Java SDK 3.1 CouchbaseQueue Doc

How to see properties of a JmDNS service on the receiver side?

One way of creating JmDNS services is:
ServiceInfo.create(type, name, port, weight, priority, props);
where props is a Map that describes some properties of the service. Does anybody have an example illustrating the use of these properties, for instance how to use them on the receiver side?
I've tried:
Hashtable<String,String> settings = new Hashtable<String,String>();
settings.put("host", "hhgh");
settings.put("web_port", "hdhr");
settings.put("secure_web_port", "dfhdyhdh");
ServiceInfo info = ServiceInfo.create("_workstation._tcp.local.", "service6", 80, 0, 0, true, settings);
but then, on a machine receiving this service, what can I do to see those properties?
I would appreciate any help.
ServiceInfo info = jmDNS.getServiceInfo(serviceEvent.getType(), serviceEvent.getName());
Enumeration<String> ps = info.getPropertyNames();
while (ps.hasMoreElements()) {
    String key = ps.nextElement();
    String value = info.getPropertyString(key);
    System.out.println(key + " " + value);
}
It has been a while since this was asked, but I had the same question. One problem with the original question is that the host and ports should not be put into the text field; in this case there should actually be two service types, one secure and one insecure (or perhaps make use of subtypes).
Here is an incomplete example that gets a list of running workstation services:
ServiceInfo[] serviceInfoList = jmdns.list("_workstation._tcp.local.");
if (serviceInfoList != null) {
    for (int index = 0; index < serviceInfoList.length; index++) {
        int port = serviceInfoList[index].getPort();
        int priority = serviceInfoList[index].getPriority();
        int weight = serviceInfoList[index].getWeight();
        InetAddress address = serviceInfoList[index].getInetAddresses()[0];
        String someProperty = serviceInfoList[index].getPropertyString("someproperty");
        // Build a UI or use some logic to decide if this service provider is the
        // one you want to use based on priority, properties, etc.
        ...
    }
}
Due to the way JmDNS is implemented, the first call to list() on a given type is slow (several seconds), but subsequent calls will be pretty fast. Providers of services can change the properties by calling info.setText(settings), and the changes will be propagated to the listeners automatically.