softlayer bandwidth pool delete - ibm-cloud

Is there any method to delete a record for bandwidth pools?
I am referring to the API SoftLayer_Network_Bandwidth_Version1_Allotment.

The method to delete a bandwidth pool is “requestVdrCancellation” of the service “SoftLayer_Network_Bandwidth_Version1_Allotment”.
You can use this REST API call:
Method: GET
https://[username]:[apiKey]@api.softlayer.com/rest/v3/SoftLayer_Network_Bandwidth_Version1_Allotment/[bandwidthPoolId]/requestVdrCancellation
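For example, with Java 11's built-in HttpClient the same call could be scripted roughly as follows; the username, API key and bandwidth pool id are placeholders you need to replace:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;

public class CancelBandwidthPool {
    public static void main(String[] args) throws Exception {
        String username = "YOUR_USERNAME";   // placeholder
        String apiKey = "YOUR_API_KEY";      // placeholder
        long bandwidthPoolId = 123456L;      // placeholder allotment id

        // SoftLayer REST authentication is HTTP Basic with username:apiKey
        String auth = Base64.getEncoder()
                .encodeToString((username + ":" + apiKey).getBytes());

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.softlayer.com/rest/v3/"
                        + "SoftLayer_Network_Bandwidth_Version1_Allotment/"
                        + bandwidthPoolId + "/requestVdrCancellation"))
                .header("Authorization", "Basic " + auth)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}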

Related

Handling multiple requests with same body - REST API

Let's say I have a microservice that just registers a user in the database, and we expose it to our clients. I want to understand the better way of handling the following scenario.
What if the user sends multiple requests in parallel (say 10 requests within one second) with the same request body? Should I keep the requests in a queue, register the user from the very first request, and deny the other 9? Or should I compare the request bodies, group identical ones, process one request from each group, and reject the rest? What is the best way to handle this scenario?
One more thing I would like to understand: is it recommended to have rate limiting (say n requests per minute) at the global API level or at the microservice level?
Thanks in advance!
The best way is to use an idempotent call. Instead of exposing an endpoint like this:
POST /users + payload
Expose an endpoint like this:
PUT /user/ID + payload
You let the caller generate the ID, and you require it to be a UUID; with a UUID it doesn't matter who generates it. This way, if the caller invokes your endpoint multiple times, the first call creates the user and the following calls just update the user with the same payload, which effectively does nothing. At least you won't generate duplicates.
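A rough sketch of such an endpoint, assuming Spring Web and an in-memory map standing in for the database (all names here are illustrative, not part of the original question):

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

@RestController
@RequestMapping("/users")
public class UserController {

    // In-memory store standing in for the real database (illustration only).
    private final Map<UUID, String> users = new ConcurrentHashMap<>();

    // The caller generates the UUID, so retries with the same id and payload are harmless.
    @PutMapping("/{id}")
    public ResponseEntity<Void> registerUser(@PathVariable UUID id, @RequestBody String payload) {
        boolean created = users.put(id, payload) == null; // create, or overwrite with the same data
        int status = created ? 201 : 204; // 201 Created on the first call, 204 No Content on retries
        return ResponseEntity.status(status).build();
    }
}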
It's always good practice to protect your services with rate limiting, and you should set it at the API level. If you define it at the microservice level and you have N instances, you will effectively authorize N times the rate, because the requests are distributed across instances.
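To make the instance problem concrete, here is a minimal fixed-window limiter sketch; it only counts requests inside one JVM, so running a copy in each of N instances would allow roughly N times the intended rate, which is why the limit belongs at the API level (gateway or a shared store):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedWindowRateLimiter {

    private final int maxRequestsPerMinute;
    // Counter per (client, minute) window; it lives only in this JVM,
    // so every instance enforces its own, independent limit.
    // (Old windows are never evicted here; a real implementation would clean them up.)
    private final Map<String, AtomicInteger> windows = new ConcurrentHashMap<>();

    public FixedWindowRateLimiter(int maxRequestsPerMinute) {
        this.maxRequestsPerMinute = maxRequestsPerMinute;
    }

    public boolean allow(String clientId) {
        long minute = System.currentTimeMillis() / 60_000;
        String key = clientId + ":" + minute;
        int count = windows.computeIfAbsent(key, k -> new AtomicInteger()).incrementAndGet();
        return count <= maxRequestsPerMinute;
    }
}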

How to perform multiple HTTP DELETE operation on same Resource with different IDs in JMeter?

I have a question regarding **writing a test for the HTTP DELETE method in JMeter using the Concurrency Thread Group**. I want to measure **how many DELETEs** it can perform in a certain amount of time for a certain number of users (i.e. threads) sending concurrent HTTP DELETE requests.
Concurrency Thread Group parameters are:
Target Concurrency: 50 (Threads)
RampUp Time: 10 secs
RampUp Steps Count: 5
Hold Target Rate Time: 5 secs
Threads Iterations Limit: infinite
The thing is that HTTP DELETE is an idempotent operation, i.e. invoking it repeatedly on the same resource (i.e. record in the database) doesn't make much sense. How can I achieve deletion of multiple EXISTING records in the database by passing the entity's ID in the URL? E.g.:
http://localhost:8080/api/authors/{id}
...where the ID is incremented for each user (i.e. thread)?
My question is: how can I automate deletion of multiple EXISTING rows in the database (Postgres 11.8)? Should I write some sort of script, or is there an easier way to achieve that?
But again, I guess it will probably perform the same operation multiple times on the same resource ID (e.g. HTTP DELETE will be invoked more than once on http://localhost:8080/api/authors/5).
Any help/advice is greatly appreciated.
P.S. I'm doing this to performance test my SpringBoot, Vert.X and Dropwizard RESTful Web service apps.
UPDATE1:
Sorry, I didn't fully specify the reason for writing these test use cases for my web service apps, which communicate with a Postgres DB. The MAIN reason why I'm doing this testing is to compare the PERFORMANCE of blocking and NON-blocking web server implementations for the mentioned frameworks (SpringBoot, Dropwizard and Vert.X). The web servers are:
Blocking implementations:
1.1. Apache Tomcat (SpringBoot)
1.2. Jetty (Dropwizard)
Non-blocking: Vert.X (uses its own implementation based on Netty)
If I use JMeter's JDBC Request in my Test Plan, won't that actually slow down test execution?
The easiest way is to use either the Counter config element or the __counter() function in order to generate an incrementing number on each API hit.
More information: How to Use a Counter in a JMeter Test
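For example, assuming the author ids in the database are sequential and start at 1 (an assumption, not something stated in the question), the HTTP Request path for the DELETE could reference a global counter directly:

http://localhost:8080/api/authors/${__counter(FALSE,)}

With FALSE the counter is shared across all threads, so every virtual user gets a different id and no two DELETEs hit the same row.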
Also, the list of IDs can be obtained from the Postgres database via the JDBC Request sampler and iterated using the ForEach Controller.

redis key update notification in rest api

I would like to utilize the key update notification mechanism of Redis in an HTTP-based REST API implemented in Java.
Once a request is received by the HTTP REST API, it publishes the details to be handled by an async process and waits for the notification associated with a unique key from Redis.
After computation, the async process will create an entry in the Redis DB with the same unique key.
The REST API receives the unique key notification and replies with the HTTP response.
Is this possible with Redis, or is there a better option to get notified inside an HTTP request/reply implementation?
This approach is fine as long as you make sure the async process that receives the message and provides the result is fast enough not to exceed any configured request timeouts (especially under the foreseen load). When you cannot guarantee that, you can consider a polling strategy:
return an async job identifier and let the client ask for its result, or
define a timeout for the async job to complete: if it provides the result in that time, return it; otherwise return the async job identifier as above (see the sketch below).
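A minimal sketch of the waiting side of this, assuming some listener (a Redis keyspace-notification or pub/sub subscriber, not shown here) calls complete() when the key appears; class and method names are placeholders:

import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class ResultWaiter {

    // One pending future per unique key, completed by the notification listener.
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called by the HTTP handler after publishing the work item.
    // Returns the result, or null on timeout so the caller can fall back to returning a job id.
    public String awaitResult(String key, long timeoutMillis) {
        CompletableFuture<String> future =
                pending.computeIfAbsent(key, k -> new CompletableFuture<>());
        try {
            return future.get(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (TimeoutException | ExecutionException e) {
            return null;
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return null;
        } finally {
            pending.remove(key);
        }
    }

    // Called by whatever consumes the Redis key notification.
    // Note: if the notification arrives before awaitResult() registered the key,
    // the result is dropped and the client ends up on the polling fallback.
    public void complete(String key, String result) {
        CompletableFuture<String> future = pending.remove(key);
        if (future != null) {
            future.complete(result);
        }
    }
}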

Pessimistic locking mechanism with IReliableQueue in Azure Service Fabric

I understand locking is scoped per transaction for IReliableQueue in Service Fabric. I have a requirement where, once the data is read from the ReliableQueue within a transaction, I need to pass the data back to my client and preserve the lock on that data for a certain duration; if the processing fails in the client, I then need to write the data back to the queue (preferably at the head, so that it is picked up first in the next iteration).
Service Fabric doesn't support this. I recommend you look into using an external queuing mechanism for this. For example, Azure Service Bus Queues provides the functionality you describe.
You can use this package to receive SB messages within your services.
"preserve the lock on that data for a certain duration"
We have done that once or twice in other contexts with success, using modifiable lists and a document field LockedUntilUtc (initialized to minimum or null), or using a separate reliable collection of locked keys (sorted on LockedUntilUtc?), whichever best suits your needs.
If you can't trust your clients to adhere to such a lock-request and write/unlock-request contract, consider an ETag pattern, where the ETag is only returned on a successful lock request...
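As an illustration of that LockedUntilUtc idea (shown here as a minimal Java sketch of the pattern, not Service Fabric or reliable-collection API): a lease is granted only when the previous one has expired, and the returned token acts as the ETag the client must present when writing back or unlocking.

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

public class LeaseTable {

    static final class Lease {
        Instant lockedUntilUtc = Instant.MIN; // initialized to minimum = unlocked
        String etag;                          // token proving ownership of the current lease
    }

    private final Map<String, Lease> leases = new ConcurrentHashMap<>();

    // Try to lock an item for 'duration'; returns the ETag on success, null if still locked.
    public synchronized String tryLock(String itemId, Duration duration) {
        Lease lease = leases.computeIfAbsent(itemId, k -> new Lease());
        if (Instant.now().isBefore(lease.lockedUntilUtc)) {
            return null; // someone else holds an unexpired lease
        }
        lease.lockedUntilUtc = Instant.now().plus(duration);
        lease.etag = UUID.randomUUID().toString();
        return lease.etag;
    }

    // Write-back / unlock is accepted only with the matching ETag and before the lease expires;
    // if the client dies, the lease simply times out and the item becomes available again.
    public synchronized boolean release(String itemId, String etag) {
        Lease lease = leases.get(itemId);
        if (lease == null || !etag.equals(lease.etag)
                || Instant.now().isAfter(lease.lockedUntilUtc)) {
            return false;
        }
        lease.lockedUntilUtc = Instant.MIN;
        lease.etag = null;
        return true;
    }
}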

How do I monitor daily API transaction usage for the Alchemy API service on Bluemix?

As I use the Alchemy API service on Bluemix, I see the daily-transaction-limit-exceeded message. How can I monitor my transaction usage to determine when I am approaching the limit?
Each API call typically equals many transactions. In the JSON response, you should see a transaction count returned for every API response that you receive from the server. However, you can determine the number of daily transactions that remain using the following query:
curl -i http://access.alchemyapi.com/calls/info/GetAPIKeyInfo?apikey=<api_key>
Replace the <api_key> variable with your own API key. In the XML that is returned, you will receive a count of your daily usage plus the transaction limit.