Inconsistency in Amount Due data in the NetSuite REST API

I am trying to pull the Amount Remaining data from invoices in NetSuite.
The problem is that the data is very inconsistent.
First of all, the API does not provide the Amount Remaining in the invoice's own currency; it looks like it is only provided in USD. So I tried to convert to the target currency with the exchange rate, and that works fine about 75% of the time.
Conversion: Amount due in foreign currency = Amount due in API response / exchange rate
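As a sanity check, here is a minimal sketch of that conversion in JavaScript, which also flags invoices where the converted amount disagrees with the reported fxAmount (field names are taken from the sample responses below; the rounding tolerance is an assumption):
// Convert the base-currency amountRemaining back into the invoice currency
// and compare it against the fxAmount reported in the same response.
function amountDueInInvoiceCurrency(invoice) {
  const converted = invoice.amountRemaining / invoice.exchangeRate;
  // Tolerance is an assumption; samples 1 and 2 match within rounding.
  const consistent = Math.abs(converted - invoice.fxAmount) < 0.05;
  return { converted, consistent };
}

// Sample 1: 6096.9 / 1.354865 ≈ 4500                 -> consistent
// Sample 3: 30132.648 / 1 = 30132.648 vs 21523.32    -> inconsistent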
SAMPLE RESPONSE 1:
{
  "Invoices": [
    {
      "amountRemaining": 6096.9,
      "currency": 2, <-- (GBP)
      "exchangeRate": 1.354865,
      "fxAmount": 4500,
      "status": "open",
      "customFieldList": { },
      "xmlns:platform_common": "urn:common_2017_2.platform.webservices.netsuite.com",
      "custom_segments": { },
      "custom_fields": { }
    }
  ]
}
In sample 2, the exchange rate should have been around 1.3, but it is wrongly set to 0.76. However, the good thing is that the Amount Remaining is calculated according to that exchange rate, so the conversion with the above formula still gives me the correct amount due.
SAMPLE RESPONSE 2:
{
  "Invoices": [
    {
      "amountRemaining": 58538.49,
      "currency": 2, <-- (GBP)
      "exchangeRate": 0.76009212,
      "fxAmount": 77015,
      "status": "open",
      "customFieldList": { },
      "xmlns:platform_common": "urn:common_2017_2.platform.webservices.netsuite.com",
      "custom_segments": { },
      "custom_fields": { }
    }
  ]
}
However, in sample 3, the exchange rate is wrongly set to 1, and the amount due should have been 21523.32, yet amountRemaining is around 30000. I am not sure how this value is calculated.
SAMPLE RESPONSE 3:
{
  "Invoices": [
    {
      "amountRemaining": 30132.648,
      "currency": 2, <-- (GBP)
      "exchangeRate": 1,
      "fxAmount": 21523.32,
      "status": "open",
      "customFieldList": { },
      "xmlns:platform_common": "urn:common_2017_2.platform.webservices.netsuite.com",
      "custom_segments": { },
      "custom_fields": { }
    }
  ]
}
How is the Amount due calculated? Am I looking at the wrong fields for the conversion?
Thanks in advance!

Related

How to set up PayPal subscriptions with multiple currencies?

For each product we do the following:
When a new product is added, create a new product with POST /v1/catalogs/products.
Using the product_id from step 1, create a new plan with POST /v1/billing/plans.
Whenever a customer clicks the "Subscribe" button, create a new subscription using the plan_id from step 2 with POST /v1/billing/subscriptions.
Problem: when creating the subscription, we are able to change the price the customer will be billed by passing a plan object to the POST /v1/billing/subscriptions endpoint to override the plan's amount. However, passing in a different currency throws an error:
"The currency code is different from the plan's currency code."
That being said, is there a way to set up PayPal subscriptions where we can pass in a different currency? Is it required to create a new plan for each currency? That does not seem like a good solution.
We create a billing plan with the following body:
{
  product_id: productId,
  name: planName,
  status: 'ACTIVE',
  billing_cycles: [
    {
      frequency: {
        interval_unit: 'MONTH',
        interval_count: 1
      },
      tenure_type: 'REGULAR',
      sequence: 1,
      // Create a temporary pricing_scheme. This will be replaced
      // with a variable amount when a subscription is created.
      pricing_scheme: {
        fixed_price: {
          value: '1',
          currency_code: 'CAD'
        }
      },
      total_cycles: 0
    }
  ],
  payment_preferences: {
    auto_bill_outstanding: true,
    payment_failure_threshold: 2
  }
}
We create the subscription with the following body (however, passing in a currency different from the plan's currency (CAD) throws the error):
{
  plan_id: planId,
  subscriber: {
    email_address: email
  },
  plan: {
    billing_cycles: [
      {
        sequence: 1,
        pricing_scheme: {
          fixed_price: {
            value: amount,
            currency_code: currency
          }
        }
      }
    ]
  },
  custom_id: productId
}
Is it required to create a new plan for each currency?
Yes. The pricing_scheme override on POST /v1/billing/subscriptions can change the amount but not the currency_code (hence the error above), so a plan's currency is effectively fixed, and you need one plan per currency.
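A minimal sketch of that workaround, assuming one plan is created per supported currency up front (the plan IDs in the lookup table are hypothetical placeholders):
// Hypothetical map from currency code to the plan created for it
// via POST /v1/billing/plans (one plan per currency).
const planIdByCurrency = {
  CAD: 'P-EXAMPLE-CAD',
  USD: 'P-EXAMPLE-USD',
  EUR: 'P-EXAMPLE-EUR'
};

function buildSubscriptionBody(currency, amount, email, productId) {
  const planId = planIdByCurrency[currency];
  if (!planId) {
    throw new Error(`No plan configured for currency ${currency}`);
  }
  // Same body as above: the amount override is allowed because the
  // currency_code now matches the selected plan's currency.
  return {
    plan_id: planId,
    subscriber: { email_address: email },
    plan: {
      billing_cycles: [
        {
          sequence: 1,
          pricing_scheme: {
            fixed_price: { value: amount, currency_code: currency }
          }
        }
      ]
    },
    custom_id: productId
  };
}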

Building a REST API for a dashboard with charts

I'm coming to you for help today because I'm working on a project that requires a REST API querying approximately 80 million indicators (documents) in a MongoDB collection. Each indicator follows this "schema":
{
  indicator_type: string,
  indicator_name: string,
  entityId: string,
  date: date,
  stringDate: string,
  value: double
}
There is currently an API built in Java, but it consumes a lot of CPU and memory, and sometimes there are timeouts or the responses take a long time; that's why we need to remake it. So my questions are:
How bad is saving the indicators like this, and are there patterns for storing this kind of data?
Which programming language can you recommend for developing these endpoints?
We think much of our problem is the database, so we are considering moving to Google BigQuery. Can BigQuery give us faster responses?
If BigQuery is not a great answer, which other tools can you recommend for this use case?
What we are trying to achieve is responses like this one:
{
  "totalConversions": {
    "visitorPedestrianAverageConversion": 5.847142857142858,
    "ticketVisitorAverageConversion": 0
  },
  "series": [
    {
      "data": [126,124,100,111,74,99,141],
      "id": "indicator_type",
      "type": "spline"
    },
    {
      "data": [1925,2377,1873,1769,1067,2460,2139],
      "id": "indicator_type",
      "type": "spline"
    },
    {
      "data": [0,0,0,0,0,0,0],
      "id": "indicator_type",
      "type": "spline"
    },
    {
      "yAxis": 1,
      "data": [0,0,0,0,0,0,0],
      "id": "indicator_type",
      "type": "column"
    },
    {
      "data": [0,0,0,0,0,0,0],
      "id": "indicator_type",
      "type": "spline"
    },
    {
      "yAxis": 2,
      "data": [0,0,0,0,0,0,0],
      "id": "indicator_type",
      "type": "scatter"
    },
    {
      "yAxis": 2,
      "data": [6.55,5.22,5.34,6.27,6.94,4.02,6.59],
      "id": "indicator_type",
      "type": "scatter"
    },
    {
      "yAxis": 2,
      "data": [100,100,100,100,100,100,100],
      "id": "indicator_type",
      "type": "spline"
    }
  ],
  "categories": ["Lun 02/03/2020","Mar 03/03/2020","Mié 04/03/2020","Jue 05/03/2020","Vie 06/03/2020","Sáb 07/03/2020","Dom 08/03/2020"]
}
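For reference, a per-day series like those above can usually be produced with a single aggregation per indicator type rather than many point queries. A minimal sketch with the Node.js MongoDB driver, assuming the schema shown earlier (the connection string, database and collection names, and the choice of $sum are assumptions):
const { MongoClient } = require('mongodb');

// Build one chart series (like the "spline" rows above) for a single
// indicator type over a date range, grouping documents into one value per day.
async function dailySeries(indicatorType, from, to) {
  // Hypothetical connection string, database, and collection names.
  const client = await MongoClient.connect('mongodb://localhost:27017');
  try {
    const indicators = client.db('metrics').collection('indicators');
    const rows = await indicators.aggregate([
      { $match: { indicator_type: indicatorType, date: { $gte: from, $lte: to } } },
      // One bucket per day; $sum is an assumption, some indicators may need $avg.
      { $group: { _id: '$stringDate', day: { $first: '$date' }, total: { $sum: '$value' } } },
      { $sort: { day: 1 } }
    ]).toArray();
    return { id: indicatorType, type: 'spline', data: rows.map(r => r.total) };
  } finally {
    await client.close();
  }
}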

How can I get branch count on a repository via GitHub API?

I'm working on a UI which lists all repositories of a given user or organization. It uses a tree format, where the first level is the repositories and the second level (the child nodes) is each repository's branches, shown when expanded.
I'm using a mechanism that deliberately doesn't require me to pull a list of all branches of a given repo, because the API rate-limits calls. Instead, all I have to do is tell the tree how many child nodes a repo contains, without actually assigning values to them (until the moment the user expands it). I was almost sure that fetching a list of repos would include the branch count in the result, but to my disappointment, I don't see it. I can only see counts of forks, stargazers, watchers, issues, etc. Everything except branch count.
The intention is that the UI will know the number of branches in advance to populate the child nodes, but not actually fetch them until after the user has expanded the parent node - thus immediately showing empty placeholders for each branch, followed by asynchronous loading of the actual branches. Again, this is to avoid too many API calls. As the user scrolls, it will use pagination to fetch only the page(s) it needs and keep them cached for later display.
Specifically, I'm using the Virtual TreeView for Delphi:
procedure TfrmMain.LstInitChildren(Sender: TBaseVirtualTree; Node: PVirtualNode;
  var ChildCount: Cardinal);
var
  L: Integer;
  R: TGitHubRepo;
begin
  L := Lst.GetNodeLevel(Node);
  case L of
    0: begin
      //TODO: Return number of branches...
      R := TGitHubRepo(Lst.GetNodeData(Node));
      ChildCount := R.I['branch_count']; //TODO: There is no such thing!!!
    end;
    1: ChildCount := 0; //Branches have no further child nodes
  end;
end;
Is there something I'm missing that allows me to get repo branch count without having to fetch a complete list of all of them up-front?
You can use the new GraphQL API instead. This allows you to tailor your queries and results to just what you need. Rather than grabbing the count and then later filling in the branches, you can do both in one query.
Try out the Query Explorer.
query {
  repository(owner: "octocat", name: "Hello-World") {
    refs(first: 100, refPrefix: "refs/heads/") {
      totalCount
      nodes {
        name
      }
    }
    pullRequests(states: [OPEN]) {
      totalCount
    }
  }
}
{
  "data": {
    "repository": {
      "refs": {
        "totalCount": 3,
        "nodes": [
          {
            "name": "master"
          },
          {
            "name": "octocat-patch-1"
          },
          {
            "name": "test"
          }
        ]
      },
      "pullRequests": {
        "totalCount": 192
      }
    }
  }
}
Pagination is done with cursors. First you get the first page - up to 100 items at a time, though we're using just 2 here for brevity. Each element in the response comes with its own unique cursor.
{
  repository(owner: "octocat", name: "Hello-World") {
    pullRequests(first: 2, states: [OPEN]) {
      edges {
        node {
          title
        }
        cursor
      }
    }
  }
}
{
  "data": {
    "repository": {
      "pullRequests": {
        "edges": [
          {
            "node": {
              "title": "Update README"
            },
            "cursor": "Y3Vyc29yOnYyOpHOABRYHg=="
          },
          {
            "node": {
              "title": "Just a pull request test"
            },
            "cursor": "Y3Vyc29yOnYyOpHOABR2bQ=="
          }
        ]
      }
    }
  }
}
You can then ask for more elements after the cursor. This will get the next 2 elements.
{
  repository(owner: "octocat", name: "Hello-World") {
    pullRequests(first: 2, after: "Y3Vyc29yOnYyOpHOABR2bQ==", states: [OPEN]) {
      edges {
        node {
          title
        }
        cursor
      }
    }
  }
}
Queries can be written like functions and passed arguments. The arguments are sent in a separate bit of JSON. This allows the query to be a simple unchanging string.
This query does the same thing as before.
query NextPullRequestPage($pullRequestCursor: String) {
  repository(owner: "octocat", name: "Hello-World") {
    pullRequests(first: 2, after: $pullRequestCursor, states: [OPEN]) {
      edges {
        node {
          title
        }
        cursor
      }
    }
  }
}
{
  "pullRequestCursor": "Y3Vyc29yOnYyOpHOABR2bQ=="
}
{ "pullRequestCursor": null } will fetch the first page.
Its rate limit calculations are more complex than the REST API's. Instead of calls per hour, you get 5000 points per hour. Each query costs a certain number of points, roughly corresponding to how much it costs GitHub to compute the results. You can find out how much a query costs by asking for its rateLimit information. If you pass it dryRun: true, it will just tell you the cost without running the query.
{
  rateLimit(dryRun: true) {
    limit
    cost
    remaining
    resetAt
  }
  repository(owner: "octocat", name: "Hello-World") {
    refs(first: 100, refPrefix: "refs/heads/") {
      totalCount
      nodes {
        name
      }
    }
    pullRequests(states: [OPEN]) {
      totalCount
    }
  }
}
{
  "data": {
    "rateLimit": {
      "limit": 5000,
      "cost": 1,
      "remaining": 4979,
      "resetAt": "2019-08-21T05:13:56Z"
    }
  }
}
This query costs just one point. I have 4979 points remaining and I'll get my rate limit reset at 05:13 UTC.
The GraphQL API is extremely flexible. You should be able to do more with it while using fewer GitHub resources and writing less code to work around rate limits.
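For completeness, a sketch of how such a query might be sent over HTTP from JavaScript (https://api.github.com/graphql and the bearer Authorization header are GitHub's documented GraphQL conventions; the environment-variable token source is a placeholder):
const query = `
  query NextPullRequestPage($pullRequestCursor: String) {
    repository(owner: "octocat", name: "Hello-World") {
      pullRequests(first: 2, after: $pullRequestCursor, states: [OPEN]) {
        edges {
          node { title }
          cursor
        }
      }
    }
  }`;

// POST the query and its variables as JSON to GitHub's GraphQL endpoint.
async function fetchPage(cursor) {
  const res = await fetch('https://api.github.com/graphql', {
    method: 'POST',
    headers: {
      Authorization: `bearer ${process.env.GITHUB_TOKEN}`, // placeholder token source
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ query, variables: { pullRequestCursor: cursor } })
  });
  const { data } = await res.json();
  return data.repository.pullRequests.edges;
}

// Usage (inside an async function):
//   const first = await fetchPage(null);                          // first page
//   const next = await fetchPage(first[first.length - 1].cursor); // next page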

REST API design for data synchronization service

What is the best practice for a data synchronization operation between client and server?
We have 2 (or more) resources:
cars -> year, model, engine
toys -> color, brand, weight
And we need to get updated resources from the server whenever there are updates to them. For example: someone made changes to the same data from another client, and we need to transfer those updates to our client application.
Request:
http://api.example.com/sync?data=cars,toys (verb?)
http://api.example.com/synchronizations?data=cars,toys (virtual resource "synchronizations")
Response with mixed data:
status code: 200
{
  message: "ok",
  data: {
    cars: [
      {
        year: 2015,
        model: "Fiat 500",
        engine: 0.9
      },
      {
        year: 2004,
        model: "Nissan Sunny",
        engine: 1.3
      }
    ],
    toys: [
      {
        color: "yellow",
        brand: "Bruder",
        weight: 2
      }
    ]
  }
}
or a response with status code 204 if no updates are available. In my opinion, making separate HTTP calls is not a good solution. What if we have 100 resources (= 100 HTTP calls)?
I am not an expert, but one method I have used in the past is to ask for a "signature" of the data, as opposed to always fetching the data itself. The signature can be a hash of the data you are looking for. So, the flow would be something like:
Get signature hash of the data
http://api.example.com/sync/signature/cars
Which returns the signature hash
Check if the signature is different from the last time you retrieved the data
If the signature is different, go and get the data
http://api.example.com/sync/cars
Have the REST API also add the new signature to the data:
{
  message: "ok",
  data: {
    cars: [
      {
        year: 2015,
        model: "Fiat 500",
        engine: 0.9
      },
      {
        year: 2004,
        model: "Nissan Sunny",
        engine: 1.3
      }
    ],
    signature: "570a90bfbf8c7eab5dc5d4e26832d5b1"
  }
}
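A minimal client-side sketch of that flow (the endpoint paths come from the steps above; the in-memory signature cache and the plain-text signature response are assumptions):
// Last signature seen per resource (an assumed in-memory cache).
const lastSignatures = {};

async function syncResource(name) {
  // Steps 1-2: fetch the cheap signature hash first.
  const signature = await (await fetch(`http://api.example.com/sync/signature/${name}`)).text();

  // Step 3: if it matches the cached one, the data has not changed.
  if (signature === lastSignatures[name]) return null;

  // Step 4: the signature differs, so fetch the full data.
  const body = await (await fetch(`http://api.example.com/sync/${name}`)).json();

  // Step 5: remember the new signature returned alongside the data.
  lastSignatures[name] = body.data.signature;
  return body.data;
}

// Usage (inside an async function):
//   const cars = await syncResource('cars'); // null when nothing changed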

Meteor Reactive Data Query for Comments with Usernames and Pictures

I am trying to implement a commenting system in a huge app and keep running into a problem with cross-reactivity and publications.
The specific problem:
When a user writes a comment, I want to show the user's name and a profile picture. The comments are in one collection; the names and pictures are in another.
When I subscribe server-side to every comment on the page and to every user whose id appears in a comment on the page, the app does not update the users available on the client when a new comment is added, because "joins" are non-reactive on the server.
When I do it on the client, I have to unsubscribe and resubscribe every time a new comment is added, and the load gets higher.
What is the best practice for implementing such a system in Meteor? How can I get around this problem without massive overpublishing?
As there is no official support for joins yet, among all the solutions out there in the community I found the package https://github.com/englue/meteor-publish-composite very helpful, and I'm using it in my app.
This example suits your requirement perfectly: https://github.com/englue/meteor-publish-composite#example-1-a-publication-that-takes-no-arguments
Meteor.publishComposite('topTenPosts', {
  find: function() {
    // Find top ten highest scoring posts
    return Posts.find({}, { sort: { score: -1 }, limit: 10 });
  },
  children: [
    {
      find: function(post) {
        // Find post author. Even though we only want to return
        // one record here, we use "find" instead of "findOne"
        // since this function should return a cursor.
        return Meteor.users.find(
          { _id: post.authorId },
          { limit: 1, fields: { profile: 1 } });
      }
    },
    {
      find: function(post) {
        // Find top two comments on post
        return Comments.find(
          { postId: post._id },
          { sort: { score: -1 }, limit: 2 });
      },
      children: [
        {
          find: function(comment, post) {
            // Find user that authored comment.
            return Meteor.users.find(
              { _id: comment.authorId },
              { limit: 1, fields: { profile: 1 } });
          }
        }
      ]
    }
  ]
});
//client
Meteor.subscribe('topTenPosts');
And the main thing is that it is reactive.
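Adapted to the comments-plus-authors case in the question, a sketch might look like this (the Comments collection, the pageId filter, and the authorId field are assumptions about your schema):
Meteor.publishComposite('commentsWithAuthors', function(pageId) {
  return {
    find: function() {
      // All comments on the current page (collection and field names assumed).
      return Comments.find({ pageId: pageId });
    },
    children: [
      {
        find: function(comment) {
          // Reactively publish each comment author's name and picture, so new
          // comments bring their authors along without resubscribing.
          return Meteor.users.find(
            { _id: comment.authorId },
            { limit: 1, fields: { profile: 1 } });
        }
      }
    ]
  };
});
// client
Meteor.subscribe('commentsWithAuthors', pageId);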