Imagine I wish to prototype a Bluemix application or simply wish to learn Bluemix. I understand that many of its services have free tiers before any charges accrue. Is there a way to set thresholds on my Bluemix account so that I am warned before exceeding the free limits? Can I constrain my account so that it disables services before charges accrue, or otherwise automatically cap my Bluemix usage?
Examples of such a need: a hobbyist who is self-studying but does not want to incur charges, a programmer whose logic error results in excessive resource consumption, or a user who accidentally neglects to shut down a resource-consuming application after testing.
There are spending notifications for paid accounts. According to the documentation, "you can set or edit notifications for total account, runtime, and service spending, as well as spending for individual services, excluding third-party services. You receive notifications when you reach 80%, 90%, and 100% of the spending thresholds you specify."
I want to try two things in Google Cloud:
Nested virtualization to create a bunch of simple office computers with GPU support (NVIDIA GRID).
Preemptible render nodes and AI training nodes, to work with some fault-tolerant algorithms.
I have activated my account, requested a GPU quota increase for all regions, emailed support, and tried to chat with technical support.
-- Mail delivery subsystem error: gc-team#google.com
-- Chat did not start; the circle swirled forever
What should I do?
Thank you.
As I have since been informed, in some regions Google Cloud prefers to open up GPUs and some services through its solution partners. I contacted sales and arranged a meeting, in which they redirected me to a solution partner. We signed a reseller contract; prices and services are the same unless you ask for additional value-added support from the local reseller. They gave me a sub-ID and registered my company domain and e-mail with GCS. Everything works now.
The bottom line I take from this is:
If you want to get serious with Google Cloud, you may want to contact sales directly.
I was really satisfied by the billing structure of cloud functions: they only charge for the runtime of a function, based on how many resources it consumes. I am looking for a similar solution for my game servers.
I checked various cloud hosting options like VMs, but all providers (DigitalOcean, Google, Amazon, etc.) charge even when there is no load on the VM (and I completely understand the reason for that).
I am looking for an option where I can deploy my game server and be charged only when it consumes resources (based on how many resources it consumes), with no charge (or a minimal charge) when the server is idle. In effect, it would auto-scale based on the current load.
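To illustrate the kind of thing I would otherwise have to script myself on a plain VM, here is a rough sketch assuming a game server on Google Compute Engine that stops the instance after a period with no players. The project/zone/instance values and the count_active_players() check are placeholders, not part of any provider's offering.

```python
# Rough idle-shutdown sketch for a game server VM on Google Compute Engine.
# Assumes application-default credentials; all names below are placeholders.
import time
from googleapiclient import discovery  # pip install google-api-python-client

PROJECT, ZONE, INSTANCE = "my-project", "us-central1-a", "game-server-1"
IDLE_LIMIT_SECONDS = 600  # stop after 10 minutes with no players

def count_active_players():
    # Placeholder: ask the game server's admin endpoint how many players are connected.
    return 0

compute = discovery.build("compute", "v1")
idle_since = None
while True:
    if count_active_players() == 0:
        idle_since = idle_since or time.time()
        if time.time() - idle_since > IDLE_LIMIT_SECONDS:
            # Stopping the instance ends compute charges; disks and static IPs still bill.
            compute.instances().stop(project=PROJECT, zone=ZONE, instance=INSTANCE).execute()
            break
    else:
        idle_since = None
    time.sleep(60)
```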
Thank you in advance for all the answers.
You can try Squids.io (https://squids.io/), a fully managed database service that allows you to power off when your server is idle and restart it at any time of your choice. Plus, it charges no management fees.
My proposed implementation for Tableau Server is a two-server setup with multiple sites, which will allow me to segregate dashboards to different groups of users. Tableau provides all of this out of the box, which is good. My question is really about production: how do I ensure that a request for one dashboard does not consume 100% of the server resources, causing other dashboard requests to be queued?
It is always good to have an example :-)
Imagine I have three 'sites' defined on my Tableau server; let's give them these names:
Sales
Marketing
Purchasing
Tableau Server has users created to permit access to dashboards within each site. Within the Sales site is a dashboard that must run a complex query (I know I would refactor and use Tableau's facilities to speed this up, but this is purely for discussion) which takes a significant amount of time and aggregation within Tableau. During this period, how can I ensure that:
Other users within Sales can still access their dashboards?
Marketing and Purchasing are not impacted by the reduced resources on the server?
Does Tableau provide any way of governing the amount of resources on the box that is assigned to each site?
You can limit the number of users that can be added to a site and can define a storage quota for a site, but I don't think there is a way to reserve CPU or network bandwidth for one site at the expense of others. All the users on the server contend for the same resources, regardless of which site defines the view.
If this is a critical factor in practice, you could stand up a second server to hold the high priority visualizations, thus reserving resources for them alone -- at the cost of a second license.
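For what it's worth, those per-site caps can be set programmatically. Here is a rough sketch using the Tableau Server REST API's sign-in and Update Site calls; the server URL, API version, credentials, and quota values are placeholders, and storageQuota is specified in megabytes. Note that this limits users and storage only, not CPU or bandwidth.

```python
# Sketch: cap a Tableau site's user count and storage via the REST API.
# All credentials, URLs, and quota values below are placeholders.
import xml.etree.ElementTree as ET
import requests  # pip install requests

SERVER = "https://tableau.example.com"
API = "3.4"
NS = {"t": "http://tableau.com/api"}

# Sign in and capture the auth token and the current site's ID.
signin_xml = (
    '<tsRequest><credentials name="admin" password="secret">'
    '<site contentUrl="Sales"/></credentials></tsRequest>'
)
resp = requests.post(f"{SERVER}/api/{API}/auth/signin", data=signin_xml)
creds = ET.fromstring(resp.content).find("t:credentials", NS)
token = creds.get("token")
site_id = creds.find("t:site", NS).get("id")

# Cap the Sales site at 25 users and 5 GB (5120 MB) of storage.
update_xml = '<tsRequest><site userQuota="25" storageQuota="5120"/></tsRequest>'
resp = requests.put(
    f"{SERVER}/api/{API}/sites/{site_id}",
    data=update_xml,
    headers={"X-Tableau-Auth": token},
)
print(resp.status_code)
```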
I'm trying to learn about high-frequency trading systems. What mechanism do HFT firms use to connect to the exchange, and what is the procedure? Does it have to go through a broker, or is it direct access? If it's direct access, what sort of connection information do I require?
Thanks in advance for your answers.
Understand that there are two different "connections" in an HFT engine. The first is the connection to a market data source. The second is to a clearing resource. As mentioned in kpavlov's answer, a very expensive COLO (co-location) is needed to get as close to the data source/target as possible. Depending on their nominal latency these COLO resources cost thousands of dollars per month.
With both connections, your trading engine must be certified by the provider (ICE, CME, etc) to comply with their requirements. With CME the certification process is automated, with ICE it employs human review. In any case, the certification requires that your software demonstrate conformance to standards and freedom from undesirable network side effects.
You must also subscribe to your data source(s) and clearing service; neither is inexpensive, and pricing varies over a pretty wide range. During the subscription process you'll gain access to the service provider's technical data specification(s), a critical part of designing your trading engine. Using old data that you find on the Internet for design purposes is a recipe for problems later. Subscription also gets you access to the providers' test sites. It is on these test sites that you test and debug your engine.
After you think your engine is ready for deployment, you begin connecting to the data/clearing production servers. This connection will take you into a place of shadows: port roulette. Not every port at the provider's network edge has the same latency. Here you'll learn that you can have the shortest latency yet seldom have orders filled first. Traditional load balancing does little to help, and CME has begun deploying FPGA-based systems to ensure correct temporal sequencing of inbound orders, but that effort is still early in its rollout.
Once you're running, you then get to learn that mistakes can be very expensive. If you place an order prior to a market pre-open event, the order is automatically rejected. Do it too often and the clearing provider will charge you a very stiff penalty. Other things can also get you penalized, or even kicked off the service, if your systems are determined to be implementing strategies that block others from access, etc.
All the major exchanges' websites have links to public data and educational resources to help you decide whether HFT is "for you" and how to go about it.
It usually requires approval from the exchange to grant access from outside. They protect their servers with firewalls, so your server/network needs to be authorized for access.
A special certification procedure with a technician (by phone) is usually required before they authorize you.
Most liquidity providers use the FIX protocol or custom APIs. You might consider implementing your connector with QuickFix to start, but it may become a bottleneck later as your traffic grows.
The information you need to connect via FIX is:
Server IP
Server port
FIX protocol credentials:
SenderCompID
TargetCompID
Username
Password
Other fields
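For example, with the Python quickfix bindings, a minimal initiator that wires up those details might look like the sketch below. The host, port, CompIDs, and credentials are placeholders that your provider supplies during onboarding, and they live in the session config file referenced in the code.

```python
# Minimal FIX initiator sketch using the quickfix Python bindings.
# Every connection detail shown here is a placeholder from the provider.
import quickfix as fix

class TradingApp(fix.Application):
    def onCreate(self, sessionID): pass
    def onLogon(self, sessionID): print("Logged on:", sessionID)
    def onLogout(self, sessionID): print("Logged out:", sessionID)
    def toAdmin(self, message, sessionID):
        # Many providers expect Username (553) and Password (554) on the Logon message.
        msg_type = fix.MsgType()
        message.getHeader().getField(msg_type)
        if msg_type.getValue() == fix.MsgType_Logon:
            message.setField(fix.Username("my-username"))
            message.setField(fix.Password("my-password"))
    def fromAdmin(self, message, sessionID): pass
    def toApp(self, message, sessionID): pass
    def fromApp(self, message, sessionID): print("Received:", message)

# sessions.cfg holds the details from the list above, for example:
#   [DEFAULT]
#   ConnectionType=initiator
#   HeartBtInt=30
#   FileStorePath=store
#   FileLogPath=log
#   StartTime=00:00:00
#   EndTime=00:00:00
#   DataDictionary=spec/FIX44.xml
#   [SESSION]
#   BeginString=FIX.4.4
#   SenderCompID=MY_SENDER
#   TargetCompID=PROVIDER
#   SocketConnectHost=203.0.113.10
#   SocketConnectPort=9876
settings = fix.SessionSettings("sessions.cfg")
initiator = fix.SocketInitiator(
    TradingApp(), fix.FileStoreFactory(settings), settings, fix.FileLogFactory(settings)
)
initiator.start()
```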
I am developing a customer account system for a chain of recycling centers in the Northwest US. One of our key features is that our customers can set up accounts that are credited with their bottle deposit refunds, instead of always disbursing cash. Customers can also drop off bags of recyclables that are processed on-site and credited. Each center runs near capacity and can physically process cans and bottles when offline, so we don't have a lot of leeway for IT infrastructure to shut down everything when the Internet goes out.
Basically, I've been asked to develop a customer account system that will allow credits from a retail center to be posted to accounts even if telecommunications with our central server break down for a period of hours. This will allow the center to keep processing and crediting customers when the pipes get clogged. Certain transactions, like withdrawals, do NOT need to occur in this situation, since we can't accurately get the customer's current balance.
We are a 100% Windows shop, and the IT manager and network admin don't want to get near anything *nix. Each retail center has an on-premise dedicated Windows Server, so that seems like a logical place to start.
I'm a huge fan of ServiceStack, and the REST-ful, message-based paradigm seems like it might work. I'd create a "Credit" message and send it to the local server. A message broker there would log the request and attempt to forward the message to the central server, where it is processed. If the central server were down, I would rely on the MQ's reliable messaging protocol to hold on to it until telecommunications are restored. The overall anticipated volume is 100s to low 1,000s of messages out of each center, so low by modern computing standards.
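To make the intent concrete, here is a rough sketch of the store-and-forward behaviour I have in mind, in plain Python rather than ServiceStack and with a placeholder URL and schema: credits always land in a durable local outbox, and a background loop relays them to the central server whenever it is reachable.

```python
# Store-and-forward sketch: durable local outbox, forwarded when the WAN is up.
# The central-server URL and the outbox schema are illustrative placeholders.
import json, sqlite3, time
import urllib.request

CENTRAL_URL = "https://central.example.com/credits"
db = sqlite3.connect("outbox.db")
db.execute("CREATE TABLE IF NOT EXISTS outbox (id INTEGER PRIMARY KEY, body TEXT)")

def record_credit(account_id, amount_cents):
    # Always succeeds locally, even while the link to the central server is down.
    body = json.dumps({"account": account_id, "amount_cents": amount_cents})
    db.execute("INSERT INTO outbox (body) VALUES (?)", (body,))
    db.commit()

def forward_pending():
    # Rows are deleted only after the central server accepts them.
    for row_id, body in db.execute("SELECT id, body FROM outbox ORDER BY id").fetchall():
        req = urllib.request.Request(CENTRAL_URL, data=body.encode(),
                                     headers={"Content-Type": "application/json"})
        try:
            urllib.request.urlopen(req, timeout=5)
        except OSError:
            return  # still offline; retry on the next pass
        db.execute("DELETE FROM outbox WHERE id = ?", (row_id,))
        db.commit()

if __name__ == "__main__":
    record_credit("C-1042", 375)  # e.g. a $3.75 bottle-deposit credit
    while True:
        forward_pending()
        time.sleep(30)
```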
The Redis MQ Client / Server for ServiceStack looks interesting, but since the Windows Redis server is explicitly labeled "prototype" and "not production quality", there is a 0% chance of being able to leverage it.
So, ultimately the questions are:
Is a reliable messaging system the right type of solution for this problem? Are there other approaches I should consider?
Are there alternatives to Redis that play well with ServiceStack? Is there a "production quality" NoSQL server replacement I can use on Windows?
I've looked briefly at RabbitMQ. Might that be an option? My Googling doesn't show any active integration between it and ServiceStack, so I'm leery of writing something from the ground up.
Ideally, the overhead of my solution is low enough that we can perform a synchronous update and return a "current balance" receipt to the customer when everything is working well. Is this realistic?
A production solution for running Redis on Windows is to run redis-server inside a Linux VM on Windows with Vagrant.
There is currently a feature request to add more MQ options to ServiceStack. Rabbit MQ is expected to be the next MQ adapter to be supported.
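In the meantime, a durable queue with persistent messages is what provides the store-and-forward behaviour you're after, regardless of the client library. Here is a rough sketch against a local Rabbit MQ broker using the Python pika client; the queue name and payload are placeholders.

```python
# Publish a "Credit" message to a durable Rabbit MQ queue so it survives
# broker restarts while waiting to be consumed. Names and values are placeholders.
import json
import pika  # pip install pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="credits", durable=True)

credit = json.dumps({"account": "C-1042", "amount_cents": 375})
channel.basic_publish(
    exchange="",
    routing_key="credits",
    body=credit,
    properties=pika.BasicProperties(delivery_mode=2),  # persist message to disk
)
connection.close()
```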
As a follow-up, MS Open Tech has released a "production-ready" native implementation of Redis 2.8.9. GitHub link.