Need an assist with a deployment diagram

I have a school project with 3 parts. One of them is to make a deployment diagram of the following scenario:
A railway company got a new information system that enables customers to buy tickets and to check in. In the first release, the customers need to call the railway's customer service and do those things via a dispatcher. In the second release, the customers can do those things via a web site, and in the last release, via an app.
Once a day, the system updates the database.
I have made a deployment diagram that I think is OK, but I want to get some suggestions.
Also, which protocol is used to connect to the database server from the regular server?
(link to the diagram: The Diagram)

Will your layers be on the same machine or on different machines/servers?
If they're on the same machine, you need not worry about the protocol; you will access the database via localhost.
If you have different servers, you should use SSL as well; there are a lot of service providers which offer MySQL online via SSL.
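For illustration, a connection from the application server to a separate MySQL server over SSL might look like this (a minimal sketch in Python using mysql-connector-python; the host, credentials and certificate path are made-up placeholders):

import mysql.connector  # pip install mysql-connector-python

# Hypothetical host and credentials -- replace with your own.
conn = mysql.connector.connect(
    host="db.example.com",                 # the separate database server
    user="app_user",
    password="secret",
    database="railway",
    ssl_ca="/etc/ssl/certs/mysql-ca.pem",  # CA certificate; enables a verified TLS connection
)

cur = conn.cursor()
cur.execute("SELECT VERSION()")
print(cur.fetchone())
conn.close()

If both layers sit on the same machine, host simply becomes "localhost" and no SSL is needed.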


How to build microservice applications right? (backend, frontend, database)

I have been developing monolithic applications for many years and want to try microservices and containers now.
To learn microservices and containers, I am planning a small calendar application (as a web application), where you can create dates and invite others to them. I identified three services that I have to implement:
Authentication service -> handles login, gives back a JWT
UserData service -> handles registration, shows user data, handles edits of user data (profile picture, name, short description)
Calendar service -> for creating, editing and deleting dates, inviting others to dates, viewing dates and so on.
This is the part I feel confident about, but if I have already done something wrong, please correct me.
Where I am not sure what to do is how to implement the database and frontend part.
Database
I have never used Neo4j or any graph-based database before, but they seem to work well for this use case, and I want to learn something new. Maybe my database choice is not relevant here, but my question is:
Should every microservice have its own database or should they access the same database?
Some data sets are connected, like every date has a creator and multiple participants.
And should the database run in a container or should it be hosted on the server directly? (I want to self-host the database)
For storing profile pictures, I decided to use a shared container volume, so multiple instances of a service can access the same files. But I have never used containers before, so if you have a better idea, I am open to hearing it.
Frontend
Should I build a single (monolithic) frontend application, or is it useful to build some kind of micro frontend, which contains only the navigation and loads other parts, like the calendar view or user data view, from the services defined above? And if yes, should I pack the UserData frontend and the UserData service into one container?
MVP
I want the whole application to be usable by others, so they can install it on their own servers/cloud. Is it possible to pack the whole application (all services and the frontend) into one package and make it installable in Kubernetes with a single step, but in a way that each service still has its own container? In my head this sounds necessary, because the calendar service won't work without the UserData service, because every date needs a user who creates it.
Additional optional services
Imagine I want to add some additional but optional features like real time chat. What is the best way to check if these features are installed? Should I add a management service which checks which services exist and tells the frontend which navigation links it should show, or should the frontend application ping all possible services to check if they are installed?
I know these are a lot of questions, but I think they are tied together, because choices in one part can influence the others.
I am looking forward to your answers.
I'm further along than you, but far from a "microservice expert". Still, I'll try my best:
Should every microservice have its own database or should they access the same database?
The rule of thumb is that a microservice should have its own database. This decouples them, making it so that your contract between services is just the API. Furthermore, it keeps the logic within a service simpler, in that there are fewer types of data the service is responsible for handling.
And should the database run in a container or should it be hosted on the server directly?
I'm not a fan of running a database in a container. In general, a database:
can consume a lot of resources (depending on the query)
sustains long-lived connections
is something you vertically scale, rather than horizontally
These qualities reflect a poor case for containerization, imho.
For storing profile pictures I decided to use a shared container volume
Nothing wrong with this, but using cloud storage like Amazon S3 is a popular move here, just so you don't have to manage the state of the volume. i.e. if the volume goes away, do you still want your pictures around?
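For example, an upload to S3 could look like this (a sketch using Python's boto3; the bucket name and object key are made up):

import boto3  # pip install boto3

s3 = boto3.client("s3")

# Hypothetical bucket and key -- replace with your own.
with open("avatar.png", "rb") as f:
    s3.upload_fileobj(f, "my-profile-pictures", "users/42/avatar.png")

# The picture now lives independently of any container volume.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-profile-pictures", "Key": "users/42/avatar.png"},
    ExpiresIn=3600,  # link valid for one hour
)
print(url)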
Should I build a single (monolithic) frontend application?
I would say yes: this would be the simplest approach. The other big motivation for microservices is to allow separate development teams to work independently. If that's not the case here, I think you should spare yourself the headache of micro frontends.
And if yes, should I pack the UserData frontend and the UserData service into one container?
I'm not sure I perfectly understand here. I think it's fine to have the frontend and backend service as part of the same codebase, which can get "deployed" together. How you want to serve the frontend is completely up to you and how it's implemented. Some like to serve it through a CDN to minimize latency, but I've never seen the need.
Is it possible to pack the whole application (all services and frontend) into one package and make it installable in kubernetes with a single step?
This is a use case for Helm charts: a package manager for Kubernetes.
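Roughly, you would describe all your services in one chart and install everything in a single step (a sketch; the chart and release names are made up):

calendar-app/              # one chart packaging all services
  Chart.yaml               # chart name, version, description
  values.yaml              # image tags, replica counts, etc.
  templates/               # one Deployment + Service per microservice
    auth.yaml
    userdata.yaml
    calendar.yaml
    frontend.yaml

helm install my-calendar ./calendar-app   # installs the whole application in one step

Each template still produces its own Deployment, so every service keeps its own container(s).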
Imagine I want to add some additional but optional features like real time chat. What is the best way to check if these features are installed? Should I add a management service which checks which services exist and tells the frontend which navigation links it should show, or should the frontend application ping all possible services to check if they are installed?
There's a thin line between what you're describing and monitoring. If it were me, I'd pick the service that radiates this information and just set some booleans in a config file or something. That's because this state isn't going to change until a reinstall, so there's no need to check it on an interval.
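For instance, the frontend (or a small management service) could read something like this once at startup (a sketch; the file name and keys are made up):

import json

# features.json is written once at install time, e.g. {"calendar": true, "chat": false}
with open("features.json") as f:
    features = json.load(f)

# Show navigation links only for features that are installed.
nav_links = [name for name, enabled in features.items() if enabled]
print(nav_links)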
--
Those were my thoughts but, as with all things architecture, there's no universal truth. Sometimes exceptions are warranted, depending on your actual circumstances.

Connecting to Oracle from iOS App

I know this has been asked a few times, but there seems to be no clear answer ... I have been searching on this for the past 3 days or more.
There seem to be 2 ways to connect to an Oracle database from an iOS app:
ODBC Client
I need to compile ODBC (which ODBC?) using gcj for ARM. I think this is the hard way, fraught with errors, but possible with quite an effort.
Web Service
Connect from the App to a web service, and from the web service to the Oracle DB.
Are these the only 2 methods available, or is there any other?
A few questions on the two methods:
a. Which is more secure?
b. Will my company's security department oppose to any of the above?
c. Which is more performant?
d. Which of the above does one normally use?
Web services are the answer; you do not want people connecting directly to the database from a mobile device. A web server will add one extra layer of security, as well as the ability to handle simultaneous requests without stressing the database directly.
a. Which is more secure?
Web services, as explained above.
b. Will my company's security department oppose to any of the above?
Yes, the security department will insist on not opening the Oracle port for direct connections, unless it is already open.
c. Which is more performant?
Web services; setting up the right cache policies in a web server can save the database resources.
d. Which of the above does one normally use?
Web services, because they offer you great advantages in security and performance. Not only that, web services are reusable and can be accessed by many different platforms. Think of the future: you might want to serve your application later on Android devices, and web services will save you a lot of development time.
Many of today's top applications in the market use web services; think about it.
Google Maps is a great example of how powerful web services are!
It's not a good idea to connect to your database directly from your app. It can be secure if you create an account that can do nothing but SELECT, but there are some other things to consider.
Why burden the app with the Oracle client?
If you have many users, you have to worry about Oracle handling a huge number of simultaneous connections. With a RESTful API, requests are stateless.
If you decide to change your schema, you'll also have to change your app. When you place a service in between, the app is no longer dependent on the schema.
An ODBC connection will require that the Oracle port is open to the Internet, which in the vast majority of cases will not be allowed for security and performance reasons. Even if it were, or even if you establish a secure VPN, direct database access requires that the connection is kept open, which can be problematic when a mobile device goes in and out of network coverage.
HTTP is far more tolerant of unreliable networks and can be encrypted using SSL (HTTPS). The problem with HTTP is that databases do not have direct support for this transport, so most people develop dedicated web services.
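A minimal sketch of such a dedicated web service, in Python with Flask and the python-oracledb driver (the table, route and connection details are all made up):

from flask import Flask, jsonify
import oracledb  # pip install oracledb (successor of cx_Oracle)

app = Flask(__name__)

# Hypothetical connection details -- read these from configuration in practice.
pool = oracledb.create_pool(user="app", password="secret",
                            dsn="dbhost.example.com/orclpdb1",
                            min=1, max=4)

@app.route("/api/tickets/<int:ticket_id>")
def get_ticket(ticket_id):
    with pool.acquire() as conn:
        cur = conn.cursor()
        cur.execute("SELECT id, status FROM tickets WHERE id = :id", id=ticket_id)
        row = cur.fetchone()
    if row is None:
        return jsonify(error="not found"), 404
    # The app only ever sees this JSON shape, never the schema itself.
    resp = jsonify(id=row[0], status=row[1])
    resp.cache_control.max_age = 60  # lets an HTTP cache absorb repeated reads
    return resp

The app then talks plain HTTPS to this service, and the Oracle port never has to be opened to the Internet.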
I work on a project called SlashDB, which automatically constructs RESTful APIs out of databases. For public APIs, you would install /db in a so-called DMZ (a network segment between two firewalls), as described in this blog post.
SlashDB can be configured to allow restricted data access to public users, or you can define specific users with varying privileges to the data. It is designed as a stateless service, which means that you can easily set up multiple nodes behind a load balancer and a reverse HTTP proxy for high-availability, web-scale deployments.
Regardless of whether you develop the web service by hand or use our product, you will achieve better scalability, performance and security for your solution than with a direct client/server approach. I would even argue that REST APIs should be used for internal enterprise data integration solutions, but that's a whole new topic.
I am going to repeat what everyone else said: a REST API is the way to go. Do not connect to the database directly. However, there might be a way to connect to your database which I have never tried myself.
http://odbcrouter.com/iosvsweb#hn_iOS_Open_Database_Connectivity_SDK

Building a web portal which will be rented to customers. Need an architecture suggestion

I am building a web portal which will be rented to customers on a hosted model (SaaS), where they will use all of the portal's features on their own domains with their own branding.
Now I don't want them to get the files of my web portal, but they should still be able to use a custom-branded portal.
One solution which someone suggested here was to host the branded version on my server and call it via an iframe on the customer's domain. However, I didn't like the idea very much.
A second approach which I researched was to host the portal on a fresh IP on my server and ask the customer to point his domain to that IP.
The web portal will be sold to a lot of customers, and they will all have separate user interfaces and branding, so this is needed.
Please tell me what you think of my approach, or if you have a better idea in mind, please pour in your suggestions.
iFrames are evil.
With that said, I would probably go with a subdomain approach. They add a subdomain like webportal.somecompany.com that points to you, and your web server routes requests to the correct hosted instance of your application based on the subdomain. That way their www.somecompany.com still goes to their own website.
We're running a SAAS application that supports branding, and we do it by dynamically serving up CSS. If all of your customers have a unique domain name pointed at your server, you could select your CSS files by domain name: If a customer logs in at "http://portal.customer.com/login", you can have his HTML link to the file "/stylesheets/portal.customer.com.css", and so forth. Alternatively, you can create a subdomain for each of your customers, and point them all at your master server, using very similar code to pick the CSS.
This lets you have a single IP address for all customers (and only as many servers as you need to support all your customers behind that IP address), instead of one IP address/server per customer - that should save on hosting costs!
(NOTE: I'm leaning toward the subdomain approach, the more I think about it. If you're using HTTPS, it would let you use a single "*.yourdomain.com" certificate, rather than trying to mess with separate certificates for each client domain.)
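A sketch of picking the stylesheet by host name (Python/Flask; the names are illustrative):

from flask import Flask, request, render_template_string

app = Flask(__name__)

# A trivial page that links the per-customer stylesheet.
PAGE = '<link rel="stylesheet" href="/stylesheets/{{ host }}.css">'

@app.route("/login")
def login():
    host = request.host.split(":")[0]  # e.g. "portal.customer.com", port stripped
    return render_template_string(PAGE, host=host)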
You don't need to run different IPs for different customers. HTTP 1.1 supports the Host header, like so:
GET / HTTP/1.1
Host: example.com
This is how most shared hosts work. When a customer sets up their DNS records to point at your server/load balancer, the incoming requests will have your client's hostname in the headers. Whether you set up virtual hosts in say Apache or do it at the application level is up to you.
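Doing it at the application level can be as simple as inspecting the Host header; here is a minimal WSGI sketch in Python (the domain-to-tenant mapping is made up):

def application(environ, start_response):
    # The Host header tells us which customer's domain was requested.
    host = environ.get("HTTP_HOST", "").split(":")[0]

    # Hypothetical mapping from domain to tenant configuration.
    tenants = {"portal.customer-a.com": "tenant_a",
               "portal.customer-b.com": "tenant_b"}
    tenant = tenants.get(host, "default")

    body = ("Serving branded portal for %s\n" % tenant).encode("utf-8")
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [body]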
Please, for your own sake, don't do iframes. There's a lot of information on the web on architecture for multi-tenant applications.
In my experience, in such a scenario your customers will come up with every possible web UI requirement you can imagine. It is therefore rather difficult to build a web UI framework that can accommodate all their needs; in fact, this would rather be a content management system.
Furthermore, for building the web UI, you may encounter any combination of customer in-house development, a 3rd-party web agency, or a request to have it developed by yourself.
In such situations, I have had good experiences with offering the SaaS as actual web services, allowing custom-developed portals to run on top. With this, anybody can build the actual portal with the client's look and feel. You could offer development and hosting as an option.

Exposed onsite vs IFD deployments for MS Dynamics CRM

I'm working for the first time on a MS Dynamics CRM 4.0 project. Our company has a high number of remote employees and even more remote consultants. As such it will be necessary to make the CRM solution available over the internet. As near as I can tell, I have three options:
Have everyone use a VPN to access an intranet site (typical onsite deployment). However, we have found that VPNs are far from trouble free and cause many support issues. We avoid them like the plague.
Use IFD to expose the CRM on the internet. I don't know much about this except that the URL will be different than the onsite URL, which could cause some headaches (see below).
Expose the CRM site by opening the site to the internet, using SSL to encrypt traffic. We currently do this with our MS sharepoint sites. I'm not sure how secure this would be (one of the reasons for this question).
I'd like to avoid using both the onsite intranet deployment and the IFD together, for a couple of reasons. One of the requirements for the solution is to use email to notify users that they've been assigned a task, and to include the URL to the task within the email. If both deployments are used, then I'll need to include two URLs, and the user would need to know which to use. Which leads to the second reason: the main users of the solution split their time between being in the office and being remote. Thus they would need to access the solution two different ways, and know when to use which. Bad.
So, what are the advantages/disadvantages of any of these methods? Any other options? Is there any issue using IFD from within the intranet? Security issues?
UPDATE:
Be sure to read the comments of the accepted answer, as they further explore the ramifications of the various options.
That is the best option: you have all the security and a low-maintenance deployment. Also, if you are developing custom code (ASPX pages), you will have only one deployment type to support. If your users are already using a VPN client, this should be the best solution.
This is the Microsoft way to do it, except for the URL duplication. This solution is used in companies where clients do not want to have a VPN client, or where VPN clients can't pass through firewalls. This solution is also almost required if your Outlook clients are using "Connect to Exchange thru the web", because in that case all the clients can open Outlook without a VPN, CRM should be exposed without a VPN, and the IFD deployment is handled natively by the Outlook client. Note that this is SSL enabled (required). EDIT: It's not required, but a best practice, even if the implementation guide says "You must define a URL for the Microsoft Dynamics CRM IFD by using the following format: https://".
This is the worst of all worlds: you have to maintain the deployment manually, and you will have all the headaches of using a deployment the way we did in CRM 3.0 (NTLM, Kerberos, etc.). I do not recommend this.
You can use an IFD deployment through the intranet, but there is some buggy behavior. The external DNS name should be configured on the internal DNS server, so that internal clients can access the internal server. And because IFD is SSL enabled, you are encrypting internal traffic...
Hope this helps!

How can I centralize an entire website engine?

Prefer responses via PHP or RoR if possible!
Example:
The slide widget at www.slide.com can be deployed anywhere on the web. But the slide developers have centralized edit capabilities to these widgets. A change to the widget core will update across all installed widgets.
Can this be done with an entire website engine?
Say I coded the Wordpress engine. Is it possible for me to deploy my engine on my customers' own servers at their own domains while still being able to control/update/edit the core and have it update all my clients' engines?
The main reason is for clients to have ownership of their content and brand. A customer may need to establish his online presence, and so he needs his own domain, but he can still pay a service to have his engine professionally managed and maintained.
As far as I can tell, the Flash objects in the slide.com widgets are hosted on their servers; all that is deployed elsewhere is markup that references the widget, similar to embedding a YouTube video. Unless there's something there that I don't see.
There are 2 ways you could do what I think you're asking.
1) You could provide a service that hosts the web application on your servers. The user could point their domain to your server and you would run the application under a virtual host and give them administrator access to the application to brand it. Essentially you would be providing a web hosting service with the application engine pre-installed.
2) You could allow them to deploy it on their servers and provide automatic updates that download and replace the application files when they are changed in the core. However, this would require write access to every folder where scripts are stored, so it would pose security problems on their server. A sketch of such an update check follows.
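Here is that sketch (Python; the update server URL and version file are hypothetical):

import json, tarfile, urllib.request

UPDATE_URL = "https://updates.example.com"  # hypothetical central update server

# Compare the locally installed version against the latest published one.
with open("engine_version.json") as f:
    local = json.load(f)["version"]

with urllib.request.urlopen(UPDATE_URL + "/latest.json") as resp:
    latest = json.load(resp)["version"]

if latest != local:
    # Download the new core and unpack it over the old files.
    archive, _ = urllib.request.urlretrieve(UPDATE_URL + "/engine-%s.tar.gz" % latest)
    with tarfile.open(archive) as tar:
        tar.extractall("engine/")  # this is the write access the answer warns about
    print("updated %s -> %s" % (local, latest))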
Depending on the type of application, you could also have much of it deployed as services on your servers, with some type of basic wrapper deployed on the customers' servers that communicates with your services. You could then centralize all of the business logic and have images, templates and such stored on their servers, and possibly scripts that communicate with their database.