We have a system with a service that has a frontend so that users can use the service manually. Users read information from the service's UI and initiate actions on the service through it. Nothing unusual here.
However, we would like to keep allowing manual use of this service while also automating the user, i.e. have a software agent we write access the same information and initiate the same types of actions.
Of course, the software agent wouldn't need to use the UI; it could query the view API and send commands to the write API. We can do this because we have a nice separation between our front-end UI and back-end service.
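To make it concrete, the agent I have in mind would look roughly like this (a sketch only; the endpoint names and the decision rule are made up):

    import time
    import requests

    BASE = "https://our-service.internal"  # hypothetical base URL of the service

    def run_agent():
        """Poll the view API and react, much as a user watching the UI would."""
        while True:
            # Read the same information the UI shows a user
            items = requests.get(f"{BASE}/view/pending-items", timeout=10).json()
            for item in items:
                # Apply the same decision rule a user applies today (made up here)
                if item["age_seconds"] > 3600:
                    # Initiate the same action a user would trigger from the UI
                    requests.post(
                        f"{BASE}/commands/escalate",
                        json={"item_id": item["id"]},
                        timeout=10,
                    ).raise_for_status()
            time.sleep(30)  # polling interval

    if __name__ == "__main__":
        run_agent()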
My question is: should we automate the user by creating an agent like this, or should we reimplement the user as another service that interacts with the original service (e.g. an event-driven service)?
Either way, the agent or service will be encapsulated within our system. The question is whether this component should use the service as a user would (but not through the UI) or implement equivalent functionality as a service.
Thanks in advance for any thoughts, suggestions or pointers.
Cheers,
Ashley.
PS I am using the term "software agent" here because the component somewhat replicates how a user works, i.e. responding to information and executing actions. I don't mean the agent is an AI, a mobile agent, etc.
If automating the user is a feature of your application, it makes sense to model it as a service to encapsulate that behavior. If the automation is for development/testing only, you might want to just script it.
Imagine a very simple user creation flow in an online marketplace:
Service A (user service) receives the request, creates a user object, and sends an async request to services B and C (e.g. via Kafka; see the sketch after this list)
Service B (notification service) receives the request and sends an email to the newly created user
Service C (referral service) receives the request and credits some funds to the referrer
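For concreteness, service A's side might look like this (a minimal sketch using the kafka-python client; the topic and field names are made up):

    import json
    from kafka import KafkaProducer  # pip install kafka-python

    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    )

    def create_user(email, referrer_id):
        user = {"id": 123, "email": email, "referrer_id": referrer_id}
        # ... persist the user in service A's own store ...

        # Fire-and-forget: services B and C consume this independently.
        # Nothing in this code says B must act before C, or that either must
        # succeed -- the flow exists only implicitly, which is the problem.
        producer.send("user.created", user)
        producer.flush()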
While this design might be laid out correctly in a design doc, it is only implicitly defined in the code, through the way the services talk to each other. How would you:
Ensure that the services talk to each other in the correct order when implementing the user creation flow (integration tests might not suffice here, since they generally test a very narrow set of paths)?
Define and enforce SLO guarantees between services in production?
Debug which service is to blame when the flow breaks down?
This is a great question, and I think this scenario is a great fit for an orchestrator. A microservices orchestration platform such as Netflix Conductor is designed to handle exactly this kind of scenario.
With Conductor we can decouple the flow and its dependencies from the underlying functions themselves; each function can be designed to do one simple thing, such as saving the user, notifying via email, or crediting referrals. We can then use the orchestration engine to assemble the required flow.
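Roughly, the definition for this flow looks like the JSON below, sketched here as a Python dict registered against Conductor's metadata API (the server URL and task names are illustrative):

    import requests

    workflow_def = {
        "name": "simple_user_creation_flow",
        "version": 1,
        "schemaVersion": 2,
        "tasks": [
            {   # runs first: persist the user
                "name": "save_user",
                "taskReferenceName": "save_user_ref",
                "type": "SIMPLE",
                "inputParameters": {"email": "${workflow.input.email}"},
            },
            {   # then notify the user by email
                "name": "send_email",
                "taskReferenceName": "send_email_ref",
                "type": "SIMPLE",
                "inputParameters": {"userId": "${save_user_ref.output.userId}"},
            },
            {   # then credit the referrer -- the order is explicit here
                "name": "credit_referral",
                "taskReferenceName": "credit_referral_ref",
                "type": "SIMPLE",
                "inputParameters": {"userId": "${save_user_ref.output.userId}"},
            },
        ],
    }

    # Register the definition with a Conductor server
    requests.post("http://localhost:8080/api/metadata/workflow", json=workflow_def)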
Such flows execute very quickly, and the small latency cost is easily offset by the benefits you get:
Ordering - the flow is defined as a workflow, which means the order can be controlled through the definition.
SLO guarantees - you can monitor for execution delays and failed transactions, and retry or replay them as required. The latency added by the orchestrator is negligible.
Debugging - Conductor gives you a UI where you can load up each transaction and see what happened, which server executed it, and so on.
To explain these concepts better, I defined your use case using some dummy APIs in a sandbox environment for Netflix Conductor:
https://play.orkes.io/workflowDef/simple_user_creation_flow
And you can see an execution of this definition here:
https://play.orkes.io/execution/5095b5ef-3e2d-11ed-9d7b-1a5314838fe6
(For clarity - I work at https://orkes.io which offers a managed service for Netflix Conductor)
I'm maintaining SReview, a mojolicious-based webapp which needs to run a lot of background jobs that change database state. These jobs require a large amount of CPU time and there are many of them, so depending on the size of the installation it may be prudent to have multiple machines run them. Even so, the number of machines that have access to the database is rather limited, so currently they access the database over a direct PostgreSQL connection.
This works, but sometimes the background jobs may need to run on the other side of a hostile network, where requiring an extra open network port just for database access is less desirable. As such, I was thinking of implementing some sort of web-based RPC protocol (probably something with JSON) and protecting access to it with OAuth2. However, I've never worked with that protocol in detail before, and could use some guidance as to which grant flow to use.
There are two ways in which the required credentials can be provided to the machine that runs these background jobs:
The job dispatcher has the ability to specify environment variables or command-line options for the background jobs. These will then be passed on to the machines that actually run the jobs in a way that can be assumed to be secure. However, that would mean that in some cases the job dispatcher itself would need to be authenticated with OAuth2 too, preferably in a way that allows it to be restarted at will without having to authenticate again and again.
As the number of machines running jobs is likely to be fairly limited, it should be possible to create machine credentials for each machine. In that case, however, it would be important to be able to run multiple sessions in parallel on the same machine.
Which grant flow would support either of those models best?
From the overview of your scenario, it is clear that the interactions are system-to-system. There is no end-user (human) interaction.
First, given that your applications execute in a secure (closed) environment, they can be considered confidential clients; the OAuth 2.0 client types section explains more on this. With this background, you can issue each distributed application component a client ID and a client secret.
Regarding the grant type, I first encourage you to familiarize yourself with all the available options by going through the Obtaining Authorization section. In simple words, it explains the different ways an application can obtain tokens (especially an access token) that can be used to invoke an OAuth 2.0 protected endpoint (in your case, the RPC endpoint).
For you, the best grant type will be the client credentials grant. It is designed for clients that have pre-established trust with the OAuth 2.0 protected endpoint. It also does not require a browser (user agent) or an end user, unlike the other grant types.
Finally, you will need an OAuth 2.0 authorization server. It registers the different distributed clients and issues client IDs and secrets to them. When a client needs to obtain tokens, it consumes the token endpoint. Each invocation of your RPC endpoint will then carry a valid access token, which you can validate using token introspection (or any other method you prefer).
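As a sketch, the client credentials flow from one of your job machines boils down to the following (the URLs are illustrative; the credentials could come from the environment variables your dispatcher already passes around):

    import os
    import requests

    TOKEN_URL = "https://auth.example.org/oauth2/token"  # your authorization server

    def get_access_token():
        """Client credentials grant: no browser and no end user involved."""
        resp = requests.post(
            TOKEN_URL,
            data={"grant_type": "client_credentials"},
            # credentials injected by the job dispatcher (env vars or CLI options)
            auth=(os.environ["OAUTH_CLIENT_ID"], os.environ["OAUTH_CLIENT_SECRET"]),
        )
        resp.raise_for_status()
        return resp.json()["access_token"]

    # Every RPC call carries the token; the server validates it via introspection.
    token = get_access_token()
    requests.post(
        "https://sreview.example.org/api/jobs/42/state",  # hypothetical RPC endpoint
        json={"state": "done"},
        headers={"Authorization": f"Bearer {token}"},
    )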
I know that typically a process is either a service provider or a client over D-Bus. Is it practically possible for a process to be both a service and a client (I think it's okay)? I have such a need in my project: originally there was a service provider and a client, but new requirements have come in and I need the original client to provide a service as well. Is there any downside, if it's theoretically doable?
Yes, it’s possible, straightforward to do, and there are no downsides as long as it’s a suitable architecture for the problem you’re trying to solve.
Many system services already do just this: they expose a system service on the bus, and also act as a client of other system services which provide information to them.
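A minimal sketch with pydbus (the interface and names are illustrative): the same bus connection publishes an object and calls another service.

    from gi.repository import GLib
    from pydbus import SessionBus  # pip install pydbus

    class Echo(object):
        """
        <node>
          <interface name='com.example.Echo'>
            <method name='EchoString'>
              <arg type='s' name='text' direction='in'/>
              <arg type='s' name='reply' direction='out'/>
            </method>
          </interface>
        </node>
        """
        def EchoString(self, text):
            return text

    bus = SessionBus()

    # Act as a service: export an object under a well-known bus name
    bus.publish("com.example.Echo", Echo())

    # Act as a client on the very same connection
    notifications = bus.get(".Notifications")  # org.freedesktop.Notifications
    notifications.Notify("demo", 0, "", "Hello",
                         "Service and client at once", [], {}, 5000)

    GLib.MainLoop().run()  # serve incoming EchoString calls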
I have a client company with a simple web application (Python Flask) and I need to add a phone notification functionality to it.
The main requirement is that the app should call users, play a certain sound file and accept some tone input ("Hello! This is an automated message from your WebApp account. You have a meeting with $John today at $5pm. Please press 1 to confirm").
The other requirement is that the solution should be relatively cheap and fast to market.
I have done some research already, and it seems there are a few sequential steps to achieve that:
Set up an Asterisk or a FreeSwitch server;
Set up a SIP account;
Write some business logic for the Asterisk server which allows making calls and playing sounds via the SIP account;
Write an API at the Asterisk server and expose it to the Python Flask web app.
Am I missing something here? Can any of the steps be omitted? Can I do it more simply?
The fastest way to get this working is to use one of the cloud voice services with a speech synthesizer. Here's a short list to check out:
Twilio
Tropo
Plivo
Here I listed some details.
Those services charge you per minute, and you may also have to pay a monthly fee.
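For a sense of how little code is involved: with Twilio's Python helper library and your existing Flask app, a rough sketch might look like this (the credentials, numbers, and URLs are placeholders):

    from flask import Flask, request
    from twilio.rest import Client  # pip install twilio

    app = Flask(__name__)
    XML = {"Content-Type": "application/xml"}

    @app.route("/voice", methods=["POST"])
    def voice():
        # TwiML that Twilio fetches when the callee answers
        return (
            "<Response>"
            "<Gather numDigits='1' action='/confirm'>"
            "<Say>Hello! This is an automated message from your WebApp account. "
            "You have a meeting with John today at 5 pm. "
            "Please press 1 to confirm.</Say>"
            "</Gather>"
            "</Response>"
        ), 200, XML

    @app.route("/confirm", methods=["POST"])
    def confirm():
        if request.form.get("Digits") == "1":
            pass  # mark the meeting as confirmed in your app
        return "<Response><Say>Thank you.</Say></Response>", 200, XML

    def notify(user_phone):
        client = Client("ACCOUNT_SID", "AUTH_TOKEN")    # your Twilio credentials
        client.calls.create(
            to=user_phone,
            from_="+15550100",                          # your Twilio number
            url="https://yourapp.example.com/voice",    # must be publicly reachable
        )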
If you want to run an independent, standalone service, I would recommend FreeSWITCH instead of Asterisk. It has rich integration possibilities and APIs. You will need to read the FreeSWITCH book in order to understand how it works and how to build your service.
I agree with Stanislav Sinyagin on the cloud-based solutions, but I would add one more: Voxeo Prophecy. Tropo is from Voxeo, but they have offered Prophecy as a solution for much longer, and it supports the open standards CCXML and VoiceXML. The advantage of CCXML for outbound notification applications is that you have much more control over the notification process.
The Prophecy platform has excellent call progress analysis (CPA), which will allow you to determine whether a machine or a human answered and handle the call accordingly. For example, it does not make sense to ask a machine to "...press one to confirm". Instead you may want to leave a message that provides a callback number for the user to confirm with after they have listened to the voice message. CPA can be used to leave a message on a machine at the correct time (when the greeting message has stopped) so that you do not get clipped messages in the voicemail. CPA will also allow you to provide detailed reports on who was notified, and for those who were not, it can tell you whether it was a bad number (a SIT tone was received), a modem or fax answered, or a ring-no-answer (pretty rare these days). These types of details can factor into your retry process for failed notifications.
The other advantage of using Prophecy and open standards is that your application will be portable to other IVR systems that are VoiceXML/CCXML compatible, if you ever want to migrate. Tropo, Twilio, and Plivo all use proprietary APIs, which means you cannot move your applications to other services. Prophecy is also available as a software solution, so if you want to take it out of the cloud you can run it on premises. You can get a two-port version for free to try it out.
There is excellent documentation on developing outbound notification systems on Voxeo's developer site. Take a look at the CCXML documentation in section F on Outbound Dialing.
Not sure which development languages you are familiar with, but if you are used to ASP.NET MVC there is an open-source project called VoiceModel that makes it easier to develop VoiceXML applications. The other advantage of VoiceModel is that you develop your application once and it will run on any VoiceXML-compatible platform as well as on Tropo. They are currently working on adding outbound notification support to the project, which will work for both Tropo and VoiceXML.
The third-party solutions listed are your easy choice. Running your own Asterisk is also suitable for what you want to do, but I think for only this much it would be overkill from an operational perspective.
In Asterisk, you can originate a call that carries the two variables you need with a (basic-authenticated) HTTP request. You will also need some settings and a tiny dialplan. Setting up the SIP account is easier or harder depending on the documentation from the provider; most of them have detailed documentation for configuring Asterisk (not so much for FreeSWITCH). Keeping the damn thing alive is what's going to get to you :)
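A rough sketch of that HTTP originate against Asterisk's AJAM interface (this assumes the manager interface is enabled in manager.conf and http.conf; the channel, context, and variable names are placeholders, and parameter handling varies a bit across Asterisk versions):

    import requests

    AJAM = "http://pbx.example.com:8088/rawman"  # AJAM endpoint exposed by Asterisk

    session = requests.Session()  # keeps the manager session cookie
    session.get(AJAM, params={"action": "Login",
                              "username": "notifier", "secret": "s3cret"})

    # Originate a call; when answered it lands in a tiny "notify" dialplan
    # context that plays the sound file and reads the digit.
    session.get(AJAM, params={
        "action": "Originate",
        "channel": "SIP/provider/15551234567",  # callee, via your SIP trunk
        "context": "notify",
        "exten": "s",
        "priority": "1",
        # the two variables the dialplan needs
        "variable": "MEETING_WITH=John,MEETING_TIME=5pm",
    })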
Apologies if something similar has been asked in the past, but my search didn't return anything I would consider directly related.
I am trying to implement a service with its backend in AWS EC2/S3 and its front end on iPhone; the service is more or less a todo list. This is not a novel idea, but it will help me in a class I teach about IT infrastructure.
Unfortunately I have access only to my own iPhone, so I cannot demonstrate scalability over AWS, etc.
Is there a way/software tool/framework to simulate virtual users for this app that can send requests to the AWS servers pretending to be from different accounts/apps?
The simulator should send requests just like my actual iPhone app would if I were to add an item to the list, or delete or edit one.
I understand stress testing is a well-established topic, but here I just want to simulate multiple users and demonstrate scalability, rather than push the web service to its limits. Nor am I sure whether this completely overlaps with traffic simulation.
Any help will be deeply appreciated.
You might be able to do it using Apache JMeter. That depends on what you have going on on the backend. But it supports the following server types:
Web - HTTP, HTTPS
SOAP
Database via JDBC
LDAP
JMS
Mail - SMTP(S), POP3(S) and IMAP(S)
Native commands or shell scripts
You should be able to wire something together with that.
http://jmeter.apache.org/
http://www.opensourcetesting.org/performance.php
I've used it at various points to simulate VERY heavy loads for my services running in AWS/EC2.
ApacheBench (ab) is a very convenient tool for HTTP load testing -- you can have it make concurrent requests to simulate multiple users. Its main advantage over other tools is that it's simple and easy to get started with. If your backend listens on HTTP, it might be worth trying ab before investing any time in something more complex.
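If you'd rather script it yourself, simulating a handful of concurrent users is only a few lines of Python (a sketch; the endpoints, payloads, and the X-Account-Id header are hypothetical stand-ins for whatever your app actually uses):

    import concurrent.futures
    import requests

    BASE = "https://your-ec2-host.example.com"  # your AWS backend

    def simulate_user(user_id):
        """Replay the requests the iPhone app would make for one user."""
        s = requests.Session()
        s.headers["X-Account-Id"] = str(user_id)  # pretend to be a distinct account
        r = s.post(f"{BASE}/items", json={"text": f"todo from user {user_id}"})
        s.get(f"{BASE}/items")
        s.delete(f"{BASE}/items/{r.json()['id']}")

    # Fifty "users" hitting the service concurrently
    with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
        list(pool.map(simulate_user, range(50)))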