I have been using SCDF for a while, and I realise the main difference between Spring XD and SCDF is that XD was born to be distributed, while SCDF seems to work as a platform for Spring Cloud Stream apps. At least the local server works that way.
So my question is: can the SCDF local server be made distributed? I see no sign of the local server heading in that direction.
Any ideas on this? Thanks.
As you may already know, the local server is not intended for production deployments; it is for development purposes only.
The local SCDF server is not intended for a distributed use case, as there is no coordination service across multiple running local servers. All the apps deployed by the local server do, however, run as separate processes.
Related
Say you have 3 or more services that communicate with each other constantly. If they are all deployed remotely to the same cluster, all is good, because they can see each other.
However, I was wondering how I could deploy one of them locally, using minikube for instance, in a way that they are still able to talk to each other.
I am aware that I can port-forward the other two so that the locally deployed one can send calls to them, but I am not sure how to make it work the other way round, so that the other two are also able to send calls to the local one.
TL;DR: Yes, it is possible, but not recommended; it is difficult and comes with a security risk.
Charlie put it very well in the comments and is absolutely right:
Your local service will not be discoverable by a remote service unless you have a direct IP. One other way is to establish an RTC or WebSocket connection between your local and remote services using an external server.
As you can see, it is possible, but not recommended. Generally, both containerization and Kubernetes tend to isolate environments. If you still want services in completely different clusters on different machines to communicate with each other, you need to configure the appropriate network connections over the public internet, which also carries a security risk.
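To make the "direct IP" approach concrete, here is a minimal sketch under some loud assumptions: the remote services are called service-b and service-c, your local service (service-a) listens on port 8080, and 203.0.113.10 stands in for an address of your workstation that the remote cluster's nodes can actually route to (e.g. over a VPN). Every name and address here is a placeholder, not something from the question:

```
# Local -> remote: forward the two remote services to localhost so the
# locally running service can call them.
kubectl port-forward svc/service-b 8081:80 &
kubectl port-forward svc/service-c 8082:80 &

# Remote -> local: a selector-less Service in the remote cluster whose
# Endpoints point at your workstation. Pods in the cluster can then call
# http://service-a as if it were deployed alongside them.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: service-a
spec:
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: v1
kind: Endpoints
metadata:
  name: service-a
subsets:
  - addresses:
      - ip: 203.0.113.10   # placeholder: your machine's routable IP
    ports:
      - port: 8080         # port your local service listens on
EOF
```

Tools such as Telepresence automate exactly this kind of two-way wiring, which is usually safer than hand-rolling it.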
If you want to set up the environment locally, it is a much better idea to run all 3 services together as an independent whole. Also take into account that Minikube is mainly designed for learning and testing and is not really suitable for production.
I want to deploy Spring Cloud Data Flow on several hosts.
I will deploy the Spring Cloud Data Flow server on one host (host-A) and deploy the agents on the other hosts (these hosts are in charge of executing the tasks).
Apart from host-A, all the other hosts run the same tasks.
Should I build on the Spring Cloud Data Flow Local Server, on the Spring Cloud Data Flow Apache YARN Server, or is there a better choice?
Do you mean how the apps are deployed on several hosts? If so, the apps are deployed using the underlying deployer implementation. For instance, with the local deployer, each app is deployed by spawning a new process. You can scale out the number of app instances using the count property during stream deployment. I am not sure what you mean by agents here.
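For example, with the local deployer, a hypothetical stream with an http source and a log sink could be scaled out at deploy time from the SCDF shell, roughly like this (the stream and app names are purely illustrative, and depending on the SCDF version the property is spelled deployer.log.count or app.log.count):

```
dataflow:> stream create --name httptest --definition "http | log"
dataflow:> stream deploy --name httptest --properties "deployer.log.count=3"
```

Note that this spawns three log processes on the single machine the local server runs on; it scales processes, not hosts.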
If I've understood the docs properly, Azure Service Fabric-based apps/microservices cannot be installed together with their Service Fabric operational environment in one "packaged installer" step. For example, if I want to deploy a set of microservices on premises at a company that runs a typical Windows Server 2012 or VMware IT center, then I'm out of luck? I'd have to require the company to first commit to (and execute) an installation of an Azure Service Fabric cluster on several machines.
If this is the case, then Azure Service Fabric is only an option for pure cloud operations, where the Service Fabric cluster can be created on demand by the provider, or for companies that have already committed to Azure Service Fabric. This means that a vendor of classical "installer-based" software cannot evolve towards the advantages of Azure Service Fabric, since the datacenter policies of potential customers are unknown.
What have I missed?
Yes, you always need a cluster to run Service Fabric applications and microservices. However, it is no longer limited to a pure cloud environment: as of September last year, the on-premises version of Azure Service Fabric for Windows Server went GA (https://azure.microsoft.com/en-us/blog/azure-service-fabric-for-windows-server-now-ga/), and it lets you run your own cluster on your own machines (physical or virtual, it doesn't matter), in another data center, or even at another cloud provider.
Of course, as you say, this requires the customer company either to have its own cluster or to have you set one up for them (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server). They will also need the competence to manage that cluster over time. It could be argued, though, that this shouldn't be much harder than managing a VMware farm or setting up and managing, say, Docker container hosts.
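For a sense of what setting one up involves: creating a standalone cluster from the downloadable package comes down to editing a JSON cluster configuration (the list of machines, fault/upgrade domains, and so on) and running one script. A minimal sketch, using the unsecure dev template that ships with the package (all machine-specific details live in the JSON file, which you must edit first):

```
# Run from the unzipped standalone package, on a machine that can reach
# all of the target nodes listed in the cluster configuration.
.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.Unsecure.DevCluster.json -AcceptEULA
```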
For the traditional 'shrink-wrapped DVD installer' type of software vendor, this might not be as easy as just supplying an .exe and some system requirements; I agree with you on that. If the customer can't or doesn't want to run their own cluster, and cloud is not an option, then it definitely adds complexity to selling and delivering your solution.
The fact that you can run your own cluster in any Windows Server environment means there is no real lock-in to Azure as a cloud platform; I think this is a big pro for Service Fabric as a framework. Once you have a cluster to receive your applications, you can focus on developing them, which cannot be said of most other cloud-based PaaS frameworks/services.
I was wondering whether anyone out there has experience deploying a Zend community app to the cloud (e.g. AWS or similar)?
I'm new to cloud hosting, having always been fortunate enough in the past to work for folks with dedicated servers. My main concern (not Zend-specific) is how you manage resilience at the database level. For example, in a traditional setup I would have two boxes running the DB (MySQL) in master/slave mode, with the master replicating to the slave. In the event of a disk failure on the master, I could swap the DB connection over from the master to the slave and rebuild the master at a later point. Is this done differently in the cloud?
Any help/pointers greatly appreciated.
It depends on the type of cloud service you use. If you're using AWS to get your own virtual machine (Amazon EC2), then it's basically the same as having a dedicated server: you can keep a master/slave setup and work it in much the same way.
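For example, a manual failover on a classic MySQL master/slave pair running on EC2 looks much like it would on dedicated hardware. A rough sketch, with host names, user names, and the password as placeholders (a real failover also has to deal with binlog coordinates and any writes in flight):

```
# On the slave, once the master is confirmed dead: stop replicating and
# start serving as the primary.
mysql -h slave-host -u admin -p -e "STOP SLAVE; RESET SLAVE;"
# Repoint the application's DB connection at slave-host. Later, rebuild
# the old master as the new slave:
mysql -h old-master -u admin -p -e "CHANGE MASTER TO MASTER_HOST='slave-host', MASTER_USER='repl', MASTER_PASSWORD='secret'; START SLAVE;"
```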
However, if you plan on using Amazon's cloud database service (Amazon SimpleDB), then you don't have to worry about masters and slaves, since Amazon handles that for you and makes sure you always have access to your data. The only caveat is that it's in beta.
One of the points of the cloud is to take your mind off the hardware. Amazon worries about that.
You might still want two virtual machines in case Amazon does maintenance that makes your VM unavailable; however, Amazon stresses that the service is highly available and should essentially never go down, so long as you pay.
Can you use MongoDB and Redis/memcached with Azure?
I'm guessing no but just want to make sure.
It turns out they do support things other than .NET; are they using Linux servers then?
You can very easily run MongoDB in Windows Azure. I presented this at MongoSV - video here.
EDIT: In December 2011, 10gen published their official MongoDB+Azure code on GitHub. This contains a project for replica sets, as well as a demo ASP.NET MVC application (taken from the Windows Azure Platform Training Kit) that uses a replica set for its storage.
Standalone servers are straightforward, except that you have to deal with scale-out: you can't run multiple instances of a standalone server simultaneously, so you'll need to plan for this, either by taking all but one instance out of the load balancer or by only launching mongod if you can acquire the Cloud Drive lock.
Replica sets are doable, as I demonstrated at MongoSV. However, I didn't cover the intricacies of gracefully shutting down a replica set to ensure zero data loss.
You can run memcached as well; see David Aiken's post about this. Note: now that the AppFabric Cache service is live, you should look into the pros and cons of using it over memcached. Cost-wise, AppFabric Cache should cost much less, as you don't have to pay for role instances to host your cache. More info about AppFabric Cache here.
You now also have the option of running Redis in Windows Azure on Linux virtual machines! In the case of Redis, this allows you to use the "official" build instead of the "unsupported" Windows build. For MongoDB, all the choices seem equally valid: running on Linux virtual machines, on "plain" Windows virtual machines, or using 10gen's package to run on "managed" VMs (Cloud Services).
FYI, there's now a Redis installer for Windows Azure available from MS Open Tech (my team). Here's a tutorial on how to use it: http://ossonazure.interoperabilitybridges.com/articles/how-to-deploy-redis-to-windows-azure-using-the-command-line-tool
[UPDATE] Azure now supports MongoDB and Redis.
http://azure.microsoft.com/blog/2014/04/22/announcing-new-mongodb-instances-on-microsoft-azure/
http://azure.microsoft.com/en-us/services/cache/
In the Azure Store you can now select Redis Cloud as an add-on.
Here's the Azure Store description:
"Redis Cloud is a fully-managed cloud service for hosting and running Redis in a highly-available and scalable manner, with predictable and stable top performance. Tell us how much memory you need and get started instantly with your new Redis database."
Published date: 3/31/2014
You can access the Store by selecting the "New" button in the Azure portal, then "Store". I have yet to use it, but it looks promising.
Azure now has a first-party Redis service, currently in preview:
http://azure.microsoft.com/en-us/documentation/articles/cache-dotnet-how-to-use-azure-redis-cache/