How to get P&L on a trade through Interactive Brokers TWS Java API - forex

Is there any way to get Profit and Loss (daily and total to date) on a particular trade made on IB TWS through its Java API?

You can, but not in the way you seem to be asking. Until a trade is closed, all profit and loss has to be calculated by you; once it is closed you can use the commissionReport method of the wrapper. A CommissionReport is sent after every execDetails callback (see the API doc).
You can always check your statements for previous profits and losses.
The flow is like this.
place trade and get fill price from execDetails
get opening commission from commissionReport
on every tick, calculate the open-position profit; use the bid/ask for realism (it's all forex gives you anyway)
close trade and get price from execDetails
get commission from commissionReport again
calculate closed trade profit/loss
Also note that CommissionReport has a field m_realizedPNL you can use, but I've never tried it.
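Put together, the flow above could be wrapped in a small helper that your EWrapper implementation delegates to from execDetails, commissionReport and tickPrice. This is only a sketch for a single long forex position, and the class and method names are invented for illustration:

public class TradePnl {
    private double fillPrice;     // opening fill price, from execDetails
    private double quantity;      // filled quantity, from execDetails
    private double commissions;   // accumulated from commissionReport
    private double openPnl;       // mark-to-market P&L while the trade is open

    public void onFill(double price, double qty) {    // opening execDetails
        fillPrice = price;
        quantity = qty;
    }

    public void onCommission(double commission) {     // each commissionReport
        commissions += commission;
    }

    public void onBid(double bid) {                    // tickPrice, bid field
        openPnl = (bid - fillPrice) * quantity - commissions;
    }

    public double onClose(double closePrice) {         // closing execDetails
        return (closePrice - fillPrice) * quantity - commissions;
    }
}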

In the TWS v9.72+ API there is a reqPnL method on the EClient which can be used to subscribe to real-time PnL (unrealized and realized) updates for a full portfolio; the updates arrive via the associated pnl method on the EWrapper.
https://interactivebrokers.github.io/tws-api/classIBApi_1_1EClient.html#a0351f22a77b5ba0c0243122baf72fa45
Additionally, for a single contract ID, you can use reqPnLSingle on the EClient.
https://interactivebrokers.github.io/tws-api/interfaceIBApi_1_1EWrapper.html#aebeb008f2b763d7bed2969b66bbd1b33
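A minimal sketch of both subscriptions, assuming client is an already-connected EClientSocket; the account ID and contract ID below are placeholders, and exact signatures may vary slightly between API versions:

int pnlReqId = 1001;
int pnlSingleReqId = 1002;
int conId = 0;   // contract ID of the instrument you traded (placeholder)

// Whole-account P&L; updates arrive on EWrapper.pnl(reqId, dailyPnL, unrealizedPnL, realizedPnL).
client.reqPnL(pnlReqId, "DU123456", "");

// P&L for a single position, identified by contract ID; updates arrive on
// EWrapper.pnlSingle(reqId, pos, dailyPnL, unrealizedPnL, realizedPnL, value).
client.reqPnLSingle(pnlSingleReqId, "DU123456", "", conId);

// Cancel the subscriptions when no longer needed.
client.cancelPnL(pnlReqId);
client.cancelPnLSingle(pnlSingleReqId);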

You may also pre-submit the order to see all the calculations, such as the commission and margin impact of the order.
To do that, set whatIf = true in the order definition.
You'll then receive openOrder events with all the calculations made for you.
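As a sketch, using the newer-style Order accessors (older API versions use public fields such as m_whatIf instead); contract, client and nextOrderId are assumed to be set up already:

Order order = new Order();
order.action("BUY");
order.orderType("MKT");
order.totalQuantity(20000);
order.whatIf(true);   // ask TWS for the calculations only; no real order is placed

client.placeOrder(nextOrderId, contract, order);

// The result comes back via EWrapper.openOrder(orderId, contract, order, orderState),
// where orderState carries values such as initMargin, maintMargin and commission.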

Related

JMeter to record results on hourly basis

I have a JMeter project with multiple GET and POST requests and assertions for these. I use the Aggregate Results and View Results Tree listeners, but neither of these can store results on an hourly basis. I tried the JMeterPlugins-Standard and JMeterPlugins-Extras packages and the jp@gc - Graphs Generator listener, but all of them use aggregated data instead of hourly data. So I would like to get the number of successful and failed requests/assertions per hour; maybe a bar chart would be most suitable for this purpose.
I'm going to suggest a non-conventional, design-level solution: name your samplers dynamically with the hour (or date and hour), so that each hour the name will change and the results will appear in a different category.
The code for such a name is:
${__time(dd:hh,)} the rest of sampler name
Such a sampler will appear in the following way in the Aggregate Report (here I simulated it with minutes/seconds, but the same will happen with days/hours, just on a larger scale):
Pros and cons of such an approach:
Simple: you can aggregate anything by hour, minute, or any other time slice while the test is running, rather than by analysis after execution.
Not listener-dependent; can be used with pretty much any listener or visualizer.
If you also want overall stats, you will have to sum up every sub-category. So it alters the data, but in a way that can still be added back to the original relatively easily.
Calculating __time before every sampler is not completely free from a performance perspective, but I don't think it will add visible overhead to a script.
You could get the same data by properly aggregating the JTL or CSV (whichever you use) after execution, so it doesn't provide you with anything that is not possible to achieve using standard methods.
The script needs altering to make this happen. If you have hundreds of samplers, it's going to take a while. And if you want to change back...
You might want to use the Filter Results Tool, which has --start-offset and --end-offset parameters; with them you can "cut" your results file into "interesting" pieces and plot them according to your requirements.
You can install the Filter Results Tool using the JMeter Plugins Manager.
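For example, assuming the plugin's usual --input-file/--output-file parameters and offsets given in seconds, extracting the second hour of a run would look something like:

FilterResults.sh --input-file results.jtl --output-file hour2.jtl --start-offset 3600 --end-offset 7200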
Also be aware that according to JMeter Best Practices you should
Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
Don't use "View Results Tree" or "View Results in Table" listeners during the load test, use them only during scripting phase to debug your scripts.
You can get whatever information you need from the .jtl results file; you can specify the test results location via the -l command-line argument.
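For example, a non-GUI run that writes results to a file (file names are placeholders):

jmeter -n -t testplan.jmx -l results.jtl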
To get summarized results per hour, add a Generate Summary Results listener to your test plan:
Generates a summary of the test run so far to the log file and/or standard output
Update the interval in jmeter.properties to your needs, e.g. 1 hour = 3600 seconds:
summariser.interval=3600
You will get a summary of your requests per hour.
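For reference, the relevant summariser entries in jmeter.properties look roughly like this; the interval is normally the only one you need to change:

summariser.name=summary
summariser.interval=3600
summariser.log=true
summariser.out=true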
You can try the JMeter Backend Listener. It has integrations with Graphite and InfluxDB. After storing the results in one of these time-series databases, you can display them in a Grafana dashboard. Grafana has its own filtering for showing the results on an hourly, daily, monthly basis and so on.

Yahoo Weather API current conditions

I am trying to get the current weather conditions for a certain location, but for some reason I always get the conditions for a semi-random day/time (more or less past week). I am using this query: https://query.yahooapis.com/v1/public/yql?q=select%20item.condition%20from%20weather.forecast%20where%20woeid%20%3D%20639660%20AND%20u%3D%22c%22&format=json&env=store%3A%2F%2Fdatatables.org%2Falltableswithkeys&callback=
Even trying out the example query on their website doesn't work for me. When I click on "Current conditions for San Diego, CA" I get the same, random results. Is there any way to get the current conditions?
I am seeing the same thing. It has to be on Yahoo's side. Unfortunately it looks like their support page is down too; it returned a 404.
I've also noticed random old data (from 1 day up to 2 weeks old) being transmitted.
http://xml.weather.yahoo.com/forecastrss/55364_f.xml It seems to change about every 30 seconds

Reactive systems - Reacting to time passing

Let's say we have a reactive sales forecasting system.
Every time we make a sale we re-calculate our Forecast for future sales.
This works beautifully if there are lots of sales triggering our re-forecasting.
What happens, however, if sales go from 100 events per second to 0, and stay at 0 for a long time?
The forecast we published back when sales were good remains the most up-to-date forecast.
How would you model, in this situation, an event that represents 'no sales happening', without falling back to some batch hourly/minutely/arbitrary time-segment event that says 'X time has passed'?
This is a specific case of a generic question: how do you model time passing with nothing happening in an event-based system, without using a ticking-clock-style event which would wake everyone up to reconsider their current values (an implementation which would not scale)?
The only option I have considered that makes sense:
Every time we take a sale, we also schedule a deferred event 2 hours in the future that asks us to reconsider our assessment of that sale.
In handling that deferred event we may then choose to schedule further deferred events for re-considering.
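As a rough illustration of that option, a deferred re-evaluation could be sketched with a scheduled executor; the names here are invented, and in a real event-driven system the "schedule" would more likely be a delayed message on a queue:

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SaleReassessor {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Called for every sale event: recalculate now, and schedule a deferred
    // re-assessment in case no further sales arrive.
    public void onSale(String productId) {
        recalculateForecast(productId);
        scheduler.schedule(() -> reconsider(productId), 2, TimeUnit.HOURS);
    }

    private void reconsider(String productId) {
        recalculateForecast(productId);
        // Optionally schedule a further deferred re-assessment here.
    }

    private void recalculateForecast(String productId) {
        // Placeholder for the actual forecasting logic.
    }
}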
Considering this is a very generic scenario, you've made a rather large assumption that it's not possible to come up with a design for re-evaluating past sales in a scalable way unless it's done one sale at a time.
There are many different scale related numbers in the scenario, and you're only looking at the one whereby a single scheduled forecast updater may attempt to process a very large number of past sales at the same time.
Other scalability issues I can think of:
Reevaluating the forecast for every single new sale doesn't sound great if you're expecting 100s of sales per second. If you're talking about a financial forecasting model for accounting, it's unlikely it needs to be updated every single time the organisation makes a sale, if the organisation is making hundreds of sales a second.
If you're talking about a short term predictive engine to be used for financial markets (ie predicting how much cash you'll need in the next 10 seconds, or energy, or other resources), then it sounds like you have constant volatility and you're not really likely to have a situation where nothing happens for hours. And if you do need forecasts updated very frequently, waiting a couple of hours before triggering a re-update is not likely to get you the kind of information you need in the way you need it.
With your approach, you will end up with one future scheduled event per product (which could be large), and every time you make a sale, you'll be dropping the old scheduled event and scheduling a new one. So for frequently selling products, you'll be doing repetitive work to constantly kick the can down the road a bit further, when you're not likely to ever get there.
What constitutes a good design is going to be based on the real scenario. The generic case is interesting to think about, but good designs need to be shaped to their circumstances.
Here are a few ideas I have that might be appropriate:
If you want an updated forecast per product when that product has a sale, but some products can sell very frequently, then a good approach may be to throttle or buffer the sales on a per-product basis (a rough sketch follows after these ideas). If a product is selling 50 times a second, you can probably afford to wait 1 second, 10 seconds, 2 hours, whatever, and evaluate all those sales at once, rather than re-forecasting 50 times a second. Especially if your forecasting process is heavy, doing it for every sale is likely to cause high load for low value, as the information will be outdated almost straight away by the next sale.
You could also have a generic timer that updates forecasts for all products that haven't sold in the last window, but handle the products in buffers. For example, every hour you could pick the 10 products with the oldest forecasts and update them. This prevents the single timer from taking on re-forecasting the entire product set in one hit.
You could use only the single timer approach above and forget the forecast updates on every sale if you want something dead simple.
If you're worried about load from batch forecasting, this kind of work should be done on different hardware from the ones handling sales.
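A minimal sketch of the per-product buffering idea; ForecastService is an invented collaborator, and a real system would likely express this with a stream-processing window rather than hand-rolled timers:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.LongAdder;

// Buffers sales per product and re-forecasts at most once per window,
// instead of once per sale.
public class BufferedForecaster {
    private final Map<String, LongAdder> pendingSales = new ConcurrentHashMap<>();
    private final ScheduledExecutorService timer = Executors.newSingleThreadScheduledExecutor();
    private final ForecastService forecasts;

    public BufferedForecaster(ForecastService forecasts, long windowSeconds) {
        this.forecasts = forecasts;
        timer.scheduleAtFixedRate(this::flush, windowSeconds, windowSeconds, TimeUnit.SECONDS);
    }

    public void onSale(String productId, long quantity) {
        pendingSales.computeIfAbsent(productId, id -> new LongAdder()).add(quantity);
    }

    // Runs once per window: re-forecast only the products that actually sold.
    private void flush() {
        pendingSales.forEach((productId, count) -> {
            long sold = count.sumThenReset();
            if (sold > 0) {
                forecasts.recalculate(productId, sold);
            }
        });
    }
}

interface ForecastService {
    void recalculate(String productId, long recentSales);
}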

Why is my CallFire phone number not available after I've placed an order via API

I have a Scala client that talks to the CallFire API. I can't find anything in the documentation about having a phone number be immediately available (accept phone calls) after placing an order from the API. Here is the specific line I use: https://github.com/oGLOWo/callfire-scala-client/blob/master/src/main/scala/com/oglowo/callfire/Client.scala#L166
I need these numbers to be available when my customers purchase them. Are there any parameters that I don't know about or something that I'm doing wrong that is causing the numbers to not pick up for several minutes?
Number purchases can take several minutes to fulfil as the order is processed by the upstream number provider, which can vary according to the region and number type. As such, this is necessarily an asynchronous process.
My suggestion would be that after you create the number order, each time that it is necessary to know the status of the number you purchased, you can invoke the GetNumber operation to get status information for that number.
The most relevant field for your purposes would be the "Status" field, which indicates where in the number fulfilment process that number is. Once the status has transitioned to "Active", your number should be fully available.
Additionally, you can look at the "CallFeature" and "TextFeature" fields, in the NumberConfiguration section of the Number resource, to see whether the number has confirmed call or text service yet, respectively.
Alternatively, you can also invoke the GetNumberOrder operation to get the status of your order. This will give you information on the status of the number order itself, but in my opinion is less useful for your purposes than querying the number status directly.
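As a rough illustration, a polling helper around that status check could look like the following; CallFireClient and getNumberStatus are hypothetical stand-ins for however your code wraps CallFire's GetNumber operation and reads its "Status" field:

import java.util.concurrent.TimeUnit;

// Hypothetical wrapper around CallFire's GetNumber operation.
interface CallFireClient {
    String getNumberStatus(String number);
}

// Polls the number's status after the order until it becomes "Active" or the timeout expires.
static boolean waitUntilActive(CallFireClient client, String number, long timeoutMinutes)
        throws InterruptedException {
    long deadline = System.currentTimeMillis() + TimeUnit.MINUTES.toMillis(timeoutMinutes);
    while (System.currentTimeMillis() < deadline) {
        if ("Active".equalsIgnoreCase(client.getNumberStatus(number))) {
            return true;   // number is fully available for calls/texts
        }
        TimeUnit.SECONDS.sleep(15);   // back off between polls
    }
    return false;          // still not active; check GetNumberOrder or contact support
}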
It is also worth mentioning that there are cases where the number is technically being serviced, but CallFire's number inventory hasn't yet been updated to indicate this. This can be pushed along by creating inbound traffic to the number on each of the features. That is, you may be able to "activate" the numbers you purchase more rapidly by sending a call or text to them. This is due to the slight delay between the number being configured upstream and CallFire's systems being notified of that fact. By sending traffic to the number, you more rapidly give CallFire's systems feedback that the number is enabled. This can save you up to a couple of minutes, if time is of the essence.
Your question has prompted me to create a feature request for CallFire internally, to add an event type to CreateSubscription for when number orders transition between statuses. This way, you could avoid having to poll for number/order statuses repeatedly, and instead we would notify your server by HTTP POST when the number order transitions to finished.

Websphere Commerce Maximum Order Quantity Per Order Change to All Order history and pending

I have here a WebSphere Commerce 7 FP5 Aurora B2B site which uses Orgs, contracts and price lists with a maximum order quantity, so that each "Store" Org is limited to buying 3 of each item and there is enough to go around. We have 3 sets of entitlements: for most stores the max is 3, for better stores the max is 5, and for a few really good ones the max qty is 10.
So we don't have to worry about allocation; these rules let every store buy based on their entitlement. When they try to put more in the cart than they should, they get the message "You are requesting to order more than your allocation limit. Please change your requested quantity." I don't know where this comes from.
Some users buy for 5 or more stores, which is selected at checkout during payment. This keeps those store owners from having to keep track of a bunch of logins.
We recently opened up Order Management; we call it multi-cart because it enables a store owner to create more than one cart by going to Order Management and creating a new order. This makes it much easier for our store owners to manage what they are buying, paying and receiving without having to call and email our CSR teams.
But now we have noticed that some stores are taking advantage of multi-cart to buy more than the MAX qty they are allowed. It wouldn't be so bad, but they are buying all the 1-per-customer stuff, and all the other stores are calling and complaining because they didn't get their share. It really isn't fair.
I was thinking of all the different places to add a SQL check of order history and pending orders. Here is what I came up with.
ATP - Inventory check
Pro- best place since customer, sku, entitlement, and everything else pretty much happens here. It is right up front.
Con - It doesn't have ship-to, so the guys with more than 1 store need to be added as an exception, leading to sloppy business logic that could change regularly
OrderItemAddCmdImpl and overloading ExtendOrderItemProcessCmd
Pro - Bring ship-to selection up to the front and control everything here.
Con - Not certain about the added overhead.
At checkout
Pro - this also will have everything
Con - I kind of want to reserve this for all payment handling. It is a little dirty to read through the order lines and kick back an error with the SKU.
Have the ERP handle the exception-
Con - I realized we are set up so that all orders ship complete; we would have to change this and don't really want to, because there are additional credit card penalties for charges less than the amount held on the credit card.
So, the questions are what are your thoughts on additional pros and cons?
Are there other places that I'm missing that would make more sense?
We might need to create a new CommandTask and then append it to all existing flows.
OR
Ask the WebSphere Commerce team to build this new logic into the next release of WebSphere Commerce.
So, for this scenario the answer was to create a JDBC query, called when items are added to the cart, that sums the quantity by SKU across pending orders.
When order items are added to the cart, the query returns the item in error if the limit is exceeded, and the item is not added.
The query runs again at checkout if the order items being paid for exceed the total max qty per SKU by shipping address on the payment page, allowing the customer a chance to change the billing address, since we are limiting the item max qty by shipping address.
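A rough sketch of what that check could look like; the table and column names follow the standard WebSphere Commerce ORDERS/ORDERITEMS schema, but the status codes and any per-shipping-address grouping are assumptions you would need to adapt:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Sums the pending quantity of a catalog entry across a member's open orders,
// so the add-to-cart logic can reject the new line if the limit would be exceeded.
static long pendingQuantityForSku(Connection con, long memberId, long catentryId)
        throws SQLException {
    String sql =
        "SELECT COALESCE(SUM(OI.QUANTITY), 0) " +
        "FROM ORDERITEMS OI " +
        "JOIN ORDERS O ON O.ORDERS_ID = OI.ORDERS_ID " +
        "WHERE O.MEMBER_ID = ? " +
        "  AND OI.CATENTRY_ID = ? " +
        "  AND O.STATUS IN ('P', 'I', 'M')";   // assumed 'pending/submitted' statuses
    try (PreparedStatement ps = con.prepareStatement(sql)) {
        ps.setLong(1, memberId);
        ps.setLong(2, catentryId);
        try (ResultSet rs = ps.executeQuery()) {
            rs.next();
            return rs.getLong(1);
        }
    }
}

The add-to-cart extension would then compare pendingQuantityForSku(...) plus the requested quantity against the entitlement limit and raise an application error if it is exceeded.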