Almost live forex currency rates [closed]

I need to get live forex exchange rates for my personal application. I know that there's no free service that has this data available for download. I've been using Yahoo Finance, but I've just found out that it has a delay of 15 minutes or so. Is there any way I could get fresher rates somewhere? Say, 5 minutes old instead of 15?
Many forex brokers offer free "informers" that autoload data at intervals of seconds, so maybe there are a few that allow this data to be downloaded at larger intervals without using their informers, strictly for personal use?

TrueFX has free real-time (multiple updates per second) forex quotes, but only for a limited number of pairs: http://webrates.truefx.com/rates/connect.html?f=html
They also have free downloadable tick data for the same pairs, going back to May 2009: http://truefx.com/?page=downloads
You can get real-time quotes for a larger selection of pairs from FXCM: http://rates.fxcm.com/RatesXML
FXCM also has free downloadable tick data going back to 2007, but you need to create a demo account and use a COM-based Windows API called Order2Go to retrieve it. They promised to make the same tick data available in CSV format for free sometime this year, here: http://www.forexcodesource.com/index.php/Category:Historical_Data
Real-time rates for about 40 currency pairs are also available from 1forge: http://1forge.com/forex-data-api, e.g.: https://1forge.com/forex-quotes/quotes
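If you just want to poll one of these feeds from a script, something like the following works. It is a minimal sketch that assumes the TrueFX endpoint's CSV variant (f=csv) returns one comma-separated row per pair; verify the exact column layout against the TrueFX docs before relying on it.

```python
# Minimal sketch: poll the TrueFX web rates endpoint and print the raw rows.
# Assumes the f=csv variant returns one comma-separated line per pair
# (pair, millisecond timestamp, bid/offer split into big figure and points, ...);
# check the actual column layout before relying on it.
import time
import urllib.request

TRUEFX_URL = "http://webrates.truefx.com/rates/connect.html?f=csv"

def fetch_quotes():
    with urllib.request.urlopen(TRUEFX_URL, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    return [line.split(",") for line in body.splitlines() if line.strip()]

if __name__ == "__main__":
    while True:
        for row in fetch_quotes():
            print(row)
        time.sleep(5)  # poll every few seconds; stay within any usage policy
```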

Here is a list of equity/FX data providers; however, they are not free.
http://finviz.com/store/market-data-providers.ashx
If you're trying to keep everything free, then you'll probably have to hack something together.
For example, in MT4 there is a DDE hook that you can use to broadcast the quotes. You'll need a Windows box (or VM) running MT4 and an app listening to the DDE server that forwards the quotes to your Linux server via a TCP socket, or even HTTP. The lag should be under a second if done right.
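If you go the DDE route, the forwarding half of the setup can be tiny. Here is a hedged sketch of just that relay (not the DDE listener itself); the host, port and quote format are placeholders:

```python
# Rough sketch of the relay described above: quote strings come in from a
# DDE listener (not shown) and are forwarded to a remote server over TCP.
# The host, port and quote format are illustrative placeholders.
import socket

REMOTE_HOST = "my-linux-box.example.com"  # hypothetical server name
REMOTE_PORT = 9100                         # hypothetical port

def forward_quotes(quotes):
    """Send each quote as one newline-terminated line over a TCP socket."""
    with socket.create_connection((REMOTE_HOST, REMOTE_PORT), timeout=5) as sock:
        for quote in quotes:
            sock.sendall((quote + "\n").encode("utf-8"))

if __name__ == "__main__":
    # In the real setup these lines would come from the DDE callback.
    forward_quotes(["EURUSD 1.0652 1.0654", "GBPUSD 1.2701 1.2704"])
```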
Here's the .net library I use to receive the DDE quotes.
http://www.4xlab.net/cs/forums/136/ShowPost.aspx
Also, if you are looking for historical tick data, then this is a great source.
http://ratedata.gaincapital.com/

Download MetaTrader from any broker and write an expert adviser to log all the data you want to a file. Have another process that reads the file. If you really want to get fancy, you can call C functions from MT4 code; it's not that hard to write some C code that stores the data in a DB instead of logging it to a file.
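For the reader side, the "other process" can be as simple as tailing the file the expert adviser appends to. A minimal sketch, where the file path and line format are assumptions:

```python
# Minimal sketch of the reader process: follow the file the MT4 expert
# adviser appends to, and handle each new line as it arrives.
# The path and the line format are assumptions for illustration only.
import time

LOG_PATH = r"C:\mt4\MQL4\Files\quotes.csv"  # hypothetical path

def follow(path):
    """Yield new lines appended to the file, like `tail -f`."""
    with open(path, "r") as f:
        f.seek(0, 2)  # jump to the end of the file
        while True:
            line = f.readline()
            if not line:
                time.sleep(0.5)
                continue
            yield line.rstrip("\n")

if __name__ == "__main__":
    for quote in follow(LOG_PATH):
        print(quote)  # parse and store in a DB instead of printing
```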


How to configure resources in a pool to handle several agents [closed]

I am trying to simulate a call center with chat support. In this scenario, a customer service representative can serve multiple customer chats at the same time, depending on their capabilities.
I started by creating an Employee agent and building on this, but I could not simulate a scenario in which one "Employee" agent can serve several client "chat" agents at the same time based on its total capacity, as in a real chat call center...
Please advise how I can configure the logic so that several agents can seize/delay one resource, or create a block in which the employee agent cycles through each chat and checks whether it can be released.
Thanks in advance
This is a more advanced question and not that easy to answer in detail without building a lot of logic and functionality.
Overall I can suggest the following design, but depending on your level of expertise in AnyLogic (and Java) it might not be the best one, and I am curious to see whether anyone ventures other options. For a moderate user (and use case), though, this design will be sufficient.
Since there is no way to do what you ask with a standard resource pool, I would suggest setting up a resource pool inside a new agent type; then, either as a population or graphically (as per my design), you can send chats to these agents. Since each agent has a resource pool inside it, you can define the number of chats an agent can handle through a parameter of the agent, which sets the number of resources in its pool.
You can then have a function that takes a chat from the queue and gives it to the first available agent that has capacity.
Call this function whenever something arrives in the queue, when something leaves a chat agent, and also whenever an agent receives a new chat, since multiple chats might arrive at the same time and we only hand over the first one.
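To make the dispatching idea concrete, here is an illustrative sketch of the logic in plain Python rather than AnyLogic's Java; the class and attribute names are invented, but the function mirrors the "give the chat to the first agent with spare capacity, then call again" behaviour described above.

```python
# Illustrative sketch of the dispatch logic (plain Python, not AnyLogic Java):
# take the next waiting chat and hand it to the first agent with spare capacity.
# Class and attribute names are invented for the example.
from collections import deque

class ChatAgent:
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity      # max simultaneous chats (the "resource pool" size)
        self.active_chats = []

    def has_capacity(self):
        return len(self.active_chats) < self.capacity

waiting_chats = deque()
agents = [ChatAgent("emp1", capacity=3), ChatAgent("emp2", capacity=2)]

def dispatch_next_chat():
    """Call on chat arrival, on chat completion, and after each assignment."""
    if not waiting_chats:
        return
    for agent in agents:
        if agent.has_capacity():
            chat = waiting_chats.popleft()
            agent.active_chats.append(chat)
            dispatch_next_chat()  # more chats may be waiting; keep assigning
            return
```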

Best way to use an API that has limited use in an app? [closed]

My situation is that I am building an app (in Swift) that pulls data from an API and displays it in a table view (say, up to 500 cells). The problem is the API: it is limited to 200 calls/day and 6k/month, and one request returns 100 pieces of data, so displaying 500 cells costs 5 call credits.
I am stuck on how to use this API efficiently. Currently, each time the user refreshes the table view it costs 5 credits, so after 40 refreshes the API cap for the day has been reached.
The only solution I have thought of is to have some script in JS/Ruby/Python that pulls the data every x minutes or x hours and saves it to a Firebase database or Firebase Cloud Storage, and then in my app I can pull the data from Firebase.
My other idea was to run the script on a server and pull the data from there.
Are there any simpler alternatives that I am missing?
To prevent over-consuming the API, why not call it yourself and save the results to your own DB? Then create a custom API, specific to your app, that pulls from your own storage; this way you control the interval and frequency of how often you hit the premium API.
You can set up a job to auto-update your personal DB with the premium data every x amount of time, updating existing entries and adding new ones as you see fit, while on the client side users pull the same premium data you've already fetched. IMO that is how I would go about it, because without that control you'll quickly face a major scaling issue.
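As a rough sketch of that server-side job (the API URL, JSON fields, paging parameters and schedule are all placeholders): pull from the rate-limited API on a fixed schedule, write the rows into your own store (SQLite here for brevity), and let the app read from that store instead of the premium API.

```python
# Sketch of the caching job: refresh a local store from the rate-limited API
# on a fixed schedule so clients never hit the premium API directly.
# The API URL, JSON field names and paging parameters are placeholders.
import json
import sqlite3
import time
import urllib.request

API_URL = "https://api.example.com/items"  # hypothetical endpoint
PAGE_SIZE = 100                            # one call returns 100 items
PAGES = 5                                  # 5 calls cover the ~500 cells
REFRESH_SECONDS = 60 * 60                  # hourly: 24 * 5 = 120 calls/day, under the 200 cap

def refresh(db):
    for page in range(1, PAGES + 1):
        url = f"{API_URL}?page={page}&page_size={PAGE_SIZE}"  # paging params are assumptions
        with urllib.request.urlopen(url, timeout=15) as resp:
            items = json.loads(resp.read())
        with db:
            for item in items:
                db.execute(
                    "INSERT OR REPLACE INTO items (id, payload) VALUES (?, ?)",
                    (str(item["id"]), json.dumps(item)),
                )

if __name__ == "__main__":
    db = sqlite3.connect("cache.db")
    db.execute("CREATE TABLE IF NOT EXISTS items (id TEXT PRIMARY KEY, payload TEXT)")
    while True:
        refresh(db)
        time.sleep(REFRESH_SECONDS)
```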

Do I need to worry about timezones in my application? [closed]

I currently have a functional version of my application (a web application). My application allows users to schedule appointments online. When appointments are submitted, do I need to account for time zones? So if someone in New York uses my application, which is hosted in California, do I need to adjust the times at which the appointments are stored as date-time stamps?
I see UTC all over the web without much understanding of whether time zones still play a role in all of this. Can someone guide me as to what approach I need to take for my application?
Thank you for your time! It's very much appreciated!
A UTC date is a date/time which is globally the same. Each of the users of your application has a local offset to this date/time, so your application can calculate the correct datetime for each one of your users, based on the UTC date and the offset.
This offset is based on the user's location, of course, which (depending on whether you're talking about a web application, desktop, or phone app, which you don't mention) might be provided by a browser via JavaScript or through the underlying OS.
So, in short, store all dates as UTC, and calculate the correct date per user to display it in your application.
Most programming languages have functionality for this built-in or available via extensions. Google for "locale" or "Localization" (l10n) in combination with the language or framework you're using to find out how to implement this exactly.
edit: you ask specifically about appointments in your application; if your application runs on a web server, the web server's datetime settings are used, so this might get you into trouble when you compare a datetime from your user with the current time on your server (to see if an appointment is due, for example).
It will also fail when you want to compare appointments from two users in different time zones to see if they overlap.
If your application only runs locally, without central storage, it will probably work without resorting to UTC datetimes.
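A minimal sketch of the store-in-UTC, convert-for-display pattern, using Python's standard-library zoneinfo module (Python 3.9+); the parsing format and zone names are just examples:

```python
# Minimal sketch: store appointment times in UTC, convert to the user's zone
# only for display. The input format and zone names are examples.
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def store_appointment(local_string, user_zone):
    """Parse what the user typed in their own zone; return a UTC datetime to store."""
    local = datetime.strptime(local_string, "%Y-%m-%d %H:%M").replace(tzinfo=ZoneInfo(user_zone))
    return local.astimezone(timezone.utc)

def display_appointment(stored_utc, user_zone):
    """Convert the stored UTC datetime back to the viewer's local time."""
    return stored_utc.astimezone(ZoneInfo(user_zone)).strftime("%Y-%m-%d %H:%M %Z")

utc_value = store_appointment("2023-03-15 09:30", "America/New_York")
print(utc_value)                                              # 2023-03-15 13:30:00+00:00
print(display_appointment(utc_value, "America/Los_Angeles"))  # 2023-03-15 06:30 PDT
```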
Yes, you need to handle time zones properly; the calendar cannot simply take the system date, as it will be different for users across the globe.
Yes, take timezones into account.
If you adjust the dates and times to UTC before you transmit and store them, you will easily be able to compensate for the quirks of Daylight Saving Time.
If you only adjust the UTC times back for display and input, based on your user's locale, then you can schedule across the world.

Best database for a Statistics System [closed]

I need to build a Statistics System but I don't know if MongoDB would be the best solution. The system needs to track a couple of things and then display the information. As an example of a similar thing: a site where every user's first visit adds a row with information about them. The system needs to store the data as fast as possible and, for example, create a chart of the growth of users viewing the page with Google Chrome. Also, if a user visits again, a field in the user's existing row is updated (say, a field called "Days").
The system needs to handle 200,000 new visits a day (new records), 20,000,000 repeat visits (updates) a day, and 800,000,000 DB records. It also needs to output the data fast, for example creating a chart of how many users visit each day from England, using Google Chrome, etc.
So what would be the best DB to handle this data? Would MongoDB handle this fine?
Thanks!
MongoDB allows atomic updates and scales very well. That's exactly what it's designed for. But keep in mind two things: beware of disk space, which may run out very quickly, and if you need quick stats (like region coverage, traffic sources, etc.), you have to precompute them. The fastest way is to build a simple daemon for this that keeps all the numbers in memory and saves them hourly/daily.
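For the "update the user's existing row" part, a single atomic upsert covers both first visits and repeat visits. A sketch with pymongo, with made-up field names and connection details:

```python
# Sketch of the atomic update/insert pattern with pymongo.
# Field names and connection details are illustrative.
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
visits = client.stats.visits

def record_visit(user_id, browser, country):
    # One atomic upsert: create the row on first visit, bump the counter on
    # repeats (per-day de-duplication is left out of the sketch).
    visits.update_one(
        {"_id": user_id},
        {
            "$inc": {"days": 1},
            "$set": {"last_seen": datetime.now(timezone.utc)},
            "$setOnInsert": {"browser": browser, "country": country},
        },
        upsert=True,
    )
```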
Redis is a very good choice for it, provided you have a lot of RAM or a strategy to shard the data over multiple nodes. It's good because (see the sketch after this list):
- It is in memory, so you can do real-time analytics (I think bit.ly's real-time stats use it); in fact, it was originally created for that.
- It is very, very fast and can do hundreds of thousands of updates a second with ease.
- It has atomic operations.
- It has sorted sets, which are great for time series.
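As a sketch of the counter approach with redis-py (the key names are invented, and per-visitor de-duplication is only hinted at with a set):

```python
# Sketch of per-day counters with redis-py, suitable for the charts described
# above. Key names are made up; adjust to whatever dimensions you need.
from datetime import date, timedelta
import redis

r = redis.Redis(host="localhost", port=6379)

def record_visit(user_id, browser, country):
    day = date.today().isoformat()
    r.incr(f"visits:{day}:total")              # plain counter per day
    r.incr(f"visits:{day}:browser:{browser}")  # per-browser counter
    r.incr(f"visits:{day}:country:{country}")  # per-country counter
    r.sadd(f"visitors:{day}", user_id)         # distinct visitors per day

def chart_data(days):
    """Return (day, total) pairs for the last N days, e.g. for a growth chart."""
    today = date.today()
    out = []
    for i in range(days - 1, -1, -1):
        d = (today - timedelta(days=i)).isoformat()
        out.append((d, int(r.get(f"visits:{d}:total") or 0)))
    return out
```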
RDM Workgroup is a database management system for desktop and server environments and allows in-memory speed as well.
You can also use its persistence feature, where you manage data in memory and then flush it to disk when the application shuts down, so there is no data loss.
It is based on the network model with an intuitive interface, so its scalability is top-notch and it will be able to handle the large load of new visitors you are expecting.

Perl video output [closed]

What's the best way of making a video as an output of a Perl program?
Video what? You can always use a simple graphics library like GD and a whole lot of ffmpeg to do what you want.
If you can figure out how to produce a data stream that ffmpeg's yuv4mpegpipe input module can handle, then you could send your data into a FIFO to avoid hitting the disk with intermediate data. Since yuv4mpegpipe appears to be a simple, nearly raw data stream, it should be fairly easy to replicate.
This link might give you some ideas: http://kylecordes.com/2007/pipe-ffmpeg
You could also try setting up either a memory-mapped file or a ramdisk of sorts to write into. But even a system with 16 GB of RAM is going to fill up very quickly when working with uncompressed video.
In general it is usually better to just write out the uncompressed files (probably an image sequence in your case) and then compress them after they are exported. The reason is that if you are doing anything interesting in the video, it will probably take many times longer to render the uncompressed frames than to compress the video. By saving the uncompressed copy, you are free to compress to different targets or fine-tune your compression settings...
In addition, working with image sequences opens the door to parallel processing on multiple cores or even multiple computers. This is how many commercial video rendering systems achieve greater speeds.
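To make the piping idea concrete, here is a small sketch, shown in Python for brevity (from Perl you could do the same by opening a pipe to the ffmpeg command): raw RGB frames are written straight to ffmpeg's stdin, so nothing uncompressed ever touches the disk. The resolution, frame rate and output name are placeholders.

```python
# Sketch of the piping approach: generate raw frames and feed them to ffmpeg's
# stdin so no uncompressed intermediate file is written.
# Resolution, frame rate and output name are placeholders.
import subprocess

WIDTH, HEIGHT, FPS, FRAMES = 320, 240, 25, 100

ffmpeg = subprocess.Popen(
    [
        "ffmpeg", "-y",
        "-f", "rawvideo", "-pix_fmt", "rgb24",
        "-s", f"{WIDTH}x{HEIGHT}", "-r", str(FPS),
        "-i", "-",        # read raw frames from stdin
        "out.mp4",
    ],
    stdin=subprocess.PIPE,
)

for i in range(FRAMES):
    # A trivial "animation": a solid colour that shifts frame by frame.
    shade = int(255 * i / FRAMES)
    frame = bytes([shade, 0, 255 - shade]) * (WIDTH * HEIGHT)
    ffmpeg.stdin.write(frame)

ffmpeg.stdin.close()
ffmpeg.wait()
```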
DOES NOT WORK!!!
UPDATE: Please ignore the answer below - upon reading through FFmpeg's source code, the URL input is not streamed; it's merely downloaded whole into a file and then regular file processing is done :(
I'm still leaving the answer up in case someone looking later finds FFmpeg's existence useful info for Perl video processing, even though it doesn't help in this specific case.
ORIGINAL ANSWER
FFmpeg does not (based on the POD) seem to allow in-RAM sources, but it does allow URL-based ones. So at the very least you can avoid doing disk IO by streaming your raw data via Apache or some smaller web server, and using FFmpeg's URL input to retrieve that data from http://localhost:yourport. The raw data would naturally come to the web server from a Perl program running under mod_perl/FCGI.