After an internal sales order is created, I want to receive the material through the backend, i.e., create the receipt once the delivery number has been generated.
Here is what I have tried: first insert the data into RCV_HEADERS_INTERFACE, then into RCV_TRANSACTIONS_INTERFACE, and then run the standard Receiving Transaction Processor concurrent program, but it doesn't work.
I am using Oracle Apps R12 (12.2.4).
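For concreteness, here is a minimal sketch of those inserts from Java/JDBC (the same inserts work from PL/SQL). The column list is trimmed to commonly required fields and every ID is a placeholder, so treat this as an assumption-laden outline, not the definitive column set; the usual culprits when "it doesn't work" are PROCESSING_STATUS_CODE / TRANSACTION_STATUS_CODE not set to 'PENDING', a missing VALIDATION_FLAG, or the header and transaction rows not sharing the same GROUP_ID. Rejected rows land in PO_INTERFACE_ERRORS, which is the first place to look.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class RcvInterfaceLoader {

    // Loads one receipt header plus one RECEIVE/DELIVER transaction into the
    // receiving open interface, then commits. Your setup may need more columns
    // (e.g., shipment_header_id from rcv_shipment_headers for the delivery).
    public static void load(Connection con, long orgId, String shipmentNum,
                            long itemId, double qty, String uom,
                            long reqLineId, long userId) throws SQLException {
        con.setAutoCommit(false);
        // One GROUP_ID must link the header row and its transaction rows.
        long groupId = nextVal(con, "rcv_interface_groups_s");

        try (PreparedStatement hdr = con.prepareStatement(
                "INSERT INTO rcv_headers_interface (header_interface_id, group_id, " +
                " processing_status_code, receipt_source_code, transaction_type, " +
                " auto_transact_code, shipment_num, ship_to_organization_id, " +
                " expected_receipt_date, validation_flag, last_update_date, " +
                " last_updated_by, creation_date, created_by) " +
                "VALUES (rcv_headers_interface_s.nextval, ?, 'PENDING', 'INTERNAL ORDER', " +
                " 'NEW', 'DELIVER', ?, ?, sysdate, 'Y', sysdate, ?, sysdate, ?)")) {
            hdr.setLong(1, groupId);
            hdr.setString(2, shipmentNum);
            hdr.setLong(3, orgId);
            hdr.setLong(4, userId);
            hdr.setLong(5, userId);
            hdr.executeUpdate();
        }

        try (PreparedStatement txn = con.prepareStatement(
                "INSERT INTO rcv_transactions_interface (interface_transaction_id, group_id, " +
                " transaction_type, transaction_date, processing_status_code, " +
                " processing_mode_code, transaction_status_code, auto_transact_code, " +
                " receipt_source_code, source_document_code, requisition_line_id, " +
                " item_id, quantity, unit_of_measure, to_organization_id, " +
                " last_update_date, last_updated_by, creation_date, created_by) " +
                "VALUES (rcv_transactions_interface_s.nextval, ?, 'RECEIVE', sysdate, " +
                " 'PENDING', 'BATCH', 'PENDING', 'DELIVER', 'INTERNAL ORDER', 'REQ', " +
                " ?, ?, ?, ?, ?, sysdate, ?, sysdate, ?)")) {
            txn.setLong(1, groupId);
            txn.setLong(2, reqLineId);
            txn.setLong(3, itemId);
            txn.setDouble(4, qty);
            txn.setString(5, uom);
            txn.setLong(6, orgId);
            txn.setLong(7, userId);
            txn.setLong(8, userId);
            txn.executeUpdate();
        }
        con.commit();
        // Then submit the Receiving Transaction Processor for this GROUP_ID.
    }

    private static long nextVal(Connection con, String seq) throws SQLException {
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT " + seq + ".nextval FROM dual")) {
            rs.next();
            return rs.getLong(1);
        }
    }
}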
We are currently incorporating a FIX engine (using QuickFixJ) in our application. We will be the initiator and use trade capture reports to be informed of all trades happening on the platform.
When persisting the trade capture reports, we first buffer them for performance reasons and later insert them all at once into the database in a single transaction. We are using the JdbcStore to persist the sent FIX messages in the database, as we cannot rely on the hard disk. However, we do not want the JdbcStore to persist the session information (target sequence number, sender sequence number, etc.), because that would open a new transaction for every single message we receive, which we want to avoid for performance reasons. Instead, we manually save the last seen and sent sequence numbers.
I have not found a configuration of QuickFixJ that allows this. If we create a JdbcStore using the JdbcStoreFactory, it expects a table in the database for storing session information. Is there any way to configure QuickFixJ to persist only the sent messages, but not the session information?
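As far as I know there is no such switch out of the box, but the MessageStore interface is small enough to implement yourself. Below is a minimal sketch (the fix_messages table name and the DataSource are assumptions, and the message persistence uses plain JDBC rather than the JdbcStore) that keeps all session state in a MemoryStore while writing sent messages to the database:
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Collection;
import java.util.Date;
import javax.sql.DataSource;

import quickfix.MemoryStore;
import quickfix.MessageStore;

// Persists outgoing messages via plain JDBC but keeps all session state
// (sequence numbers, creation time) in memory, so no per-message
// transaction is opened against a sessions table.
public class MessagesOnlyStore implements MessageStore {

    private final DataSource dataSource;    // your pooled DataSource
    private final MemoryStore sessionState; // in-memory sequence numbers

    public MessagesOnlyStore(DataSource dataSource) throws IOException {
        this.dataSource = dataSource;
        this.sessionState = new MemoryStore();
    }

    @Override
    public boolean set(int sequence, String message) throws IOException {
        // "fix_messages" is a hypothetical table (seq_num NUMBER, message CLOB).
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO fix_messages (seq_num, message) VALUES (?, ?)")) {
            ps.setInt(1, sequence);
            ps.setString(2, message);
            ps.executeUpdate();
            return true;
        } catch (SQLException e) {
            throw new IOException(e);
        }
    }

    @Override
    public void get(int start, int end, Collection<String> messages) throws IOException {
        // Needed to answer resend requests: SELECT ... WHERE seq_num BETWEEN ? AND ?.
    }

    // Everything below is pure in-memory session state.
    @Override public int getNextSenderMsgSeqNum() throws IOException { return sessionState.getNextSenderMsgSeqNum(); }
    @Override public int getNextTargetMsgSeqNum() throws IOException { return sessionState.getNextTargetMsgSeqNum(); }
    @Override public void setNextSenderMsgSeqNum(int next) throws IOException { sessionState.setNextSenderMsgSeqNum(next); }
    @Override public void setNextTargetMsgSeqNum(int next) throws IOException { sessionState.setNextTargetMsgSeqNum(next); }
    @Override public void incrNextSenderMsgSeqNum() throws IOException { sessionState.incrNextSenderMsgSeqNum(); }
    @Override public void incrNextTargetMsgSeqNum() throws IOException { sessionState.incrNextTargetMsgSeqNum(); }
    @Override public Date getCreationTime() throws IOException { return sessionState.getCreationTime(); }
    @Override public void reset() throws IOException { sessionState.reset(); }
    @Override public void refresh() throws IOException { /* nothing to reload */ }
}
You would then hand it to the initiator through a one-line MessageStoreFactory whose create(SessionID) returns this store, and restore your manually saved sequence numbers via setNextSenderMsgSeqNum/setNextTargetMsgSeqNum at startup.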
This is still a theory in my mind.
I'm rebuilding my backend by splitting things into microservices. The microservices I'm imagining for starting off are:
- Order (stores order details and status of each order)
- Customer (stores customer details, addresses, orders booked)
- Service Provider (stores service provider details, status & location of each service provider, order(s) currently being processed by the service provider, etc.)
- Payment (stores payment info for each order)
- Channel (communicates with customers via email / SMS / mobile push)
I hope to be able to use PUB/SUB to create a message with corresponding data, which can be used by any other microservice subscribing to that message.
First off, I understand the concept that each microservice should have complete code & data isolation (thus, on different instances / VMs); and that all microservices should communicate strictly using HTTP REST API contracts.
My doubts are as follows:
To show a list of orders, I'll be using the Order DB to get all orders. Each Order document (I'll be using MongoDB for storage) will have a customer_id foreign key, which raises the issue of resolving customer_name from customer_id.
If I need to show 100 orders on the page, and go with the assumption that each order has a unique customer_id associated with it, will I need to make 100 REST API calls to get the names for all 100 customer_ids?
Or, is data replication a good solution for this problem?
I am envisioning something like this w.r.t. PUB/SUB: The business center personnel mark an order as assigned & select the service provider to allot to that order. This creates a message on the cross-server PUB/SUB channel.
Then, the Channel microservice (which is on a totally different instance / VM) captures this message & sends a Push message & SMS to the service provider's device using the data within the message's contents.
Is this possible at all?
UPDATE TO QUESTION 2: I want the Order microservice to be completely independent of any other microservices that will be built upon / side-by-side it. Channel microservice is an example of a microservice that depends upon events taking place within Order microservice.
Also, please guide me on which technologies / libraries to use.
What I'll be developing on:
Java
MongoDB
Amazon AWS instances for each microservice.
Would appreciate anyone's help on this.
Thanks!
#1
If I need to show 100 orders and each order has a unique customer_id, will I need to make 100 REST API calls?
No, just make 1 batch request with the 100 customer_ids and get back a dictionary of customer_id => customer_name.
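For example (the endpoint and response shape below are made up, not any standard):
GET
/customers/names?ids=17,42,99
Response:
{
"17" : "Alice",
"42" : "Bob",
"99" : "Carol"
}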
#2
It's a single request
POST
/orders/new
{
"selected_service_provider_id" : "123"
...
}
This can return an order_id, which you can print locally for the customer, use to track progress, and so on.
On the server side, you receive the order and process it. Processing can include sending an SMS at some stage. That functionality can be implemented inside the original service that received the request, or as a separate call to another dedicated service.
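For the cross-server pub/sub part on AWS specifically, one common option (an assumption on my part, not the only choice) is Amazon SNS: the Order service publishes an "order assigned" event, and the Channel microservice subscribes to the topic (e.g., via an SQS queue or HTTP endpoint) and sends the push / SMS from the message contents. A minimal publisher sketch with the AWS SDK for Java, where the topic ARN and event fields are placeholders:
import com.amazonaws.services.sns.AmazonSNS;
import com.amazonaws.services.sns.AmazonSNSClientBuilder;
import com.amazonaws.services.sns.model.PublishRequest;

public class OrderEventPublisher {

    private final AmazonSNS sns = AmazonSNSClientBuilder.defaultClient();

    // Placeholder ARN: replace with your own "order-assigned" topic.
    private static final String TOPIC_ARN =
            "arn:aws:sns:us-east-1:123456789012:order-assigned";

    // Publish the event; the Channel microservice never talks to the Order
    // service directly, it only consumes messages from the topic.
    public void orderAssigned(String orderId, String serviceProviderId) {
        String json = String.format(
                "{\"event\":\"ORDER_ASSIGNED\",\"order_id\":\"%s\",\"service_provider_id\":\"%s\"}",
                orderId, serviceProviderId);
        sns.publish(new PublishRequest(TOPIC_ARN, json));
    }
}
This keeps the Order microservice independent, as asked in the update to question 2: it only emits events and has no knowledge of which services consume them.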
To your first question: you don't need to do 100 queries, just one with the array of your 100 ids, like the following:
db.collection.find( { _id : { $in : [1,2,3,4] } } );
https://stackoverflow.com/a/7713461/1384539
I know this question is a year old, but I would like to add my answer to the first point.
One option would be to use some form of CQRS and also store some of the customer details in the Order DB when creating an order. That way, when you have to show the list of orders, you already have all the details you need. The order document then represents a snapshot of the customer's state at the moment the order was created.
Of course, if you don't have the customer details when storing the order, you just need to make a GET call to the Customer service, but that is 1 call, not 100.
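For example, the order document could carry a snapshot of the customer it was created for (all field names here are illustrative):
{
"_id" : "ord-1001",
"status" : "ASSIGNED",
"customer" : { "customer_id" : "c-17", "name" : "Alice", "phone" : "+91-9xxxxxxxxx" },
"service_provider_id" : "sp-123"
}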
Considering the setup of kdb+ tick, how do the tables get pushed through the sockets?
In tick, it's possible to subscribe a process (let's say a) to the tickerplant, which will then push the data for the subscribed 'tickers' to a as new data arrives.
I would like to do the same, but I was wondering how. As far as I know, inter-process communication between q processes is just the ability to send commands from one process to another, so that the commands are executed on the other side.
So how is it possible to transport a complete table between processes?
I know the functions which do this in tick are .u.pub and .u.sub, but it's not clear to me how the tables are transported between the processes.
So I have two questions:
How does kdb+ tick do this?
How can I push a table from one process to the other in general?
Let's walk through a simplified version of the process:
We have one server 'S' and one client 'C'. When 'C' calls the .u.sub function, that code connects to 'S' using its host and port and calls a specific function on 'S' (let's say 'request') with the subscription parameters.
On receiving this request, the 'request' function on 'S' adds the following entries to the subscription table it maintains:
-> The handle of the incoming connection (.z.w in q), which identifies the client
-> The subscription params (for example, the client sends sym `VOD.L to subscribe)
Now when 'S' gets a data update from the feed, it goes through its subscription table and checks for entries whose subscription param column (sym in our case) matches the incoming data. It then calls each matching client's 'upd' function over that client's handle, passing the new data. This also answers how a whole table travels between processes: the argument of a remote call can be any q object, so sending (`upd; `trade; table) down a handle serializes the table on one side and deserializes it and executes upd on the other.
The only requirement is that the client has an 'upd' function defined on its side.
This is a very basic version of the process. kdb+ tick uses the same idea with extra optimizations and features: a more optimized structure for the subscription table, log maintenance, log replay, unsubscription, recovery logic, a timer for publishing, and a lot more.
For more details, you can check the definitions of the functions in the .u namespace.
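Purely to illustrate that bookkeeping outside of q, here is the same subscription-table idea sketched in Java (deliberately not q; the Handle interface stands in for an open IPC handle, where u.q's real structures are .u.w, .u.sub and .u.pub):
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy tickerplant: remembers which subscriber handle asked for which sym
// and pushes each incoming row to every matching subscriber.
public class TickerPlant {

    // Stand-in for an open IPC handle; in q this is just the int from .z.w.
    public interface Handle {
        void upd(String table, Object[] row);  // remote call to the client's upd
    }

    private final Map<String, List<Handle>> subscriptions = new HashMap<>();

    // .u.sub equivalent: record the caller's handle against the requested sym.
    public void sub(String sym, Handle handle) {
        subscriptions.computeIfAbsent(sym, s -> new ArrayList<>()).add(handle);
    }

    // .u.pub equivalent: on a feed update, push the row to each subscriber of that sym.
    public void pub(String table, String sym, Object[] row) {
        for (Handle h : subscriptions.getOrDefault(sym, List.of())) {
            h.upd(table, row);
        }
    }
}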
I have a database reader channel set up that actually reads the database at 10 second intervals and sends to a web service just fine. We get a valid response from the wsdl.
However, I need to update the database record so that it is flagged as having been processed; in this case we are simply changing a field from 100 to 101. However, when I try to update the field OR send an email containing ANY data that has been stored in mapper variables, I get nothing. The database does not update. Emails send blanks for the fields.
When I go into the channel messages for processed messages I can see good data in the Raw Message and Encoded Message tabs. There are no values in the Mappings tab.
Any suggestions on troubleshooting?
The Run On-Update statement does not have access to the channel map, as it runs after message encoding (and even after the post-processor, I believe).
It DOES have access to the globalChannelMap and the responseMap. Put your new ID in the globalChannelMap and you should be good to go.
If you also want to send an email, I would recommend adding an SMTP Writer destination instead; it will have access to any channelMap variables created in Destination 1, as well as the globalChannelMap.
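For example (the variable and column names here are made up): in a transformer step on the source connector, stash the ID with
globalChannelMap.put('lastProcessedId', msg['orderId'].toString());
and then reference it in the Database Reader's On-Update statement as
UPDATE orders SET status = 101 WHERE id = ${lastProcessedId}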
Is it possible to have Sync Services for ADO.NET read data from a table on multiple devices and insert it into a central SQL Server, having an additional column in the central table with the origin of the row data?
Let's say I have equipped door-to-door sales people with a device where they register sales. The local table would contain rows with sales information, and the central database would contain the same data + a column with the ID of the sales person.
Is that possible, or would I need the sales person's ID in the local database too?
Sync Framework identifies each client with a GUID (see: How To: Use Session Variables), and you can use that to map a particular client to a particular salesperson (see: Identifying Which Client Made a Data Change in either How to: Use a Custom Change Tracking System or How to: Use SQL Server Change Tracking).
Or try the approach here for intercepting the change dataset and inserting/substituting the salesperson value: Part 1 – Upload Synchronization where the Client and Server Primary Keys are different