I have published my dnx/Web Service Fabric stateless service locally and it works. When I publish to the cloud (carefully setting up the correct ports), it does not start correctly. The error is the usual "partition is below replica count".
My suspicion is that dnx is not installed by default on the cluster VMs. Is there any way around that? I don't appear to have a login to those VMs, so I can't install ASP.NET 5 manually.
Found the issue - it was not DNX.
I set up a new cluster and was able to log in. There are 22304 error messages saying that my second, non-dnx stateless service (which is in the same application package) is causing this event:
.NET Runtime version: 4.0.30319.34014 - This application could not be started. This application requires one of the following versions of the .NET Framework:
.NETFramework,Version=v4.5.2
Do you want to install this .NET Framework version now?
I'll figure out how to target correctly.
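For reference, which runtime a service executable asks for is typically controlled by the supportedRuntime element in its app.config. A minimal sketch (the sku value below is the one the error above demands; it has to match a framework version actually installed on the cluster VMs):

<!-- app.config of the service executable; hypothetical, shown only to illustrate targeting -->
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
  </startup>
</configuration>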
I have created a Spring Batch (partitioned) application based on the example at https://github.com/mminella/S3JDBC. My app reads some files from an object store, does some processing, and writes back to the object store. With local partitioning, my app works fine on my machine.
I changed the Maven configuration to run on Cloud Foundry, made the changes for the deployer partition handler and step execution listener, and deployed to PCF.
But while trying to push and run the app on PCF, I am getting an issue:
Failing URI: /v2/info. I tried to log the error and found that there is one call from my app, e.g. https://mypcf.com:443/v2/info, and after that it gives the error. I can't provide full logs because of some restrictions. So I want to know:
1. To deploy a Spring Batch app on PCF, is there any extra configuration needed beyond the code changes for the deployer partition handler, step execution listener, and cloud task, plus this Maven dependency?
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-deployer-cloudfoundry</artifactId>
    <version>1.1.0.M1</version>
</dependency>
2. Is it mandatory to have a separate database service like MySQL for the partitioned job? Can't I use H2 (the default, if I don't configure anything)?
3. Do I need to do any configuration in PCF to support running multiple partitions?
4. As I am running remote partitioning, can I run the app locally in STS or IntelliJ (not on PCF Dev) so that it runs against PCF (remote) and launches the workers there? (Sorry for the stupid question, I am new to PCF.)
Thanks for checking out my example. To answer your questions:
1. You should be able to use the latest deployer release (instead of that rather old version).
2. Yes. Partitioned steps all need to be able to share the same job repository data store, so an in-memory database like H2 will not work for that use case.
3. Besides defining your datasource, that's all that is required to run in PCF (see the sketch after this list). That being said, there are other things that need to be configured, but you can use other mechanisms to do so (Spring Cloud Config Server, application.properties/yml, etc.).
4. Yes, you should be able to run the master locally and have it deploy the workers onto PCF if you're using the CF deployer.
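To illustrate the shared datasource point, a minimal application.properties sketch, assuming a MySQL service is reachable from the app; every value below is a placeholder, and on PCF these would typically come from a bound service or the config server instead:

# Shared job repository datasource - master and all worker partitions must point at the same database
spring.datasource.url=jdbc:mysql://<host>:3306/batch_repo
spring.datasource.username=<user>
spring.datasource.password=<password>
spring.datasource.driver-class-name=com.mysql.jdbc.Driver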
I have an Azure Service Fabric application which consumes a RabbitMQ queue and makes some calculations using data from a SQL database.
Connection strings for RabbitMQ and SQL are stored in ApplicationManifest.xml as Parameters and are then changed via different publishing profiles (I have different XML files for cloud and local deployment).
Now I want to deploy another instance of my application for another db/rabbitmq.
I suppose I must create another publishing profile, change the config package version (e.g. 1.1.0), and register the new application type in the cluster, but without upgrading the existing app. Then I would create another app with version 1.1.0.
So there will be two apps in my cluster
App for db2/rabbit2 ver 1.1.0
App for db1/rabbit1 ver 1.0.0
Is this an appropriate scenario for having two apps with different connection strings?
One approach would be to only have one Application Type and then instantiate multiple Application instances of that type; each of those applications can consume a different db/rabbitmq. During application creation, you can pass different connection strings (db/rabbitmq) as parameters.
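A rough PowerShell sketch of that approach (fabric:/MyApp2, MyAppType, and the parameter keys are placeholders; the keys have to match the Parameters declared in your ApplicationManifest.xml):

Connect-ServiceFabricCluster

# Create a second application instance of the already-registered type,
# passing instance-specific connection strings as application parameters
New-ServiceFabricApplication -ApplicationName fabric:/MyApp2 `
    -ApplicationTypeName MyAppType `
    -ApplicationTypeVersion 1.0.0 `
    -ApplicationParameter @{ SqlConnectionString = "<db2>"; RabbitConnectionString = "<rabbit2>" }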
I am trying to copy an app to the Service Fabric image store.
I am not able to copy the application via VS or PowerShell (probably because fabric:/System/ImageStoreService is in an Error state). The operation times out when done using Visual Studio; when done using PowerShell, it just stays stuck indefinitely.
I don't know how to approach services that are not running on the Service Fabric cluster. I have other services failing on the cluster as well - this is a new test cluster created using the Azure portal yesterday (see attached screenshot).
Error event: SourceId='System.FM', Property='State'. Partition is below target replica or instance count.
ImageStoreService 3 3 00000000-0000-0000-0000-000000003000
N/P InBuild _nodr_0 131636712421204228
(Showing 1 out of 1 replicas. Total available replicas: 0)
I'm trying to upgrade a Service Fabric application with a mix of stateful and stateless actors. I did some refactoring and so removed some actors I didn't need any more. Now, when I try to upgrade the application, I get the following error:
Services must be explicitly deleted before removing their Service Types.
After thinking about it a little bit, I think I understand the trouble that could come from removed services and upgrades, but then what's the correct way to do this?
You need to remove the service instances before you can upgrade to a version that doesn't contain the removed service package. Either:
In SF Explorer, navigate to the service and click Actions > Delete Service
In PowerShell:
Connect-ServiceFabricCluster
Remove-ServiceFabricService -ServiceName fabric:/MyApp/MyService
DO BE CAREFUL - if you're deleting a stateful service, you'll lose all its data. Always be sure to have a periodic backup of production data. Once the instance is deleted, the upgrade can proceed as usual; see the sketch below.
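A rough PowerShell sketch of the subsequent upgrade (the package path, image store path, application name, and version are placeholders, and the image store connection string depends on your cluster):

Connect-ServiceFabricCluster

# Copy the new package (which no longer contains the removed service type) to the image store
Copy-ServiceFabricApplicationPackage -ApplicationPackagePath .\MyAppPkg `
    -ImageStoreConnectionString fabric:ImageStore `
    -ApplicationPackagePathInImageStore MyAppPkg

# Register the new version and start a monitored rolling upgrade
Register-ServiceFabricApplicationType -ApplicationPathInImageStore MyAppPkg
Start-ServiceFabricApplicationUpgrade -ApplicationName fabric:/MyApp `
    -ApplicationTypeVersion 2.0.0 -Monitored -FailureAction Rollback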
I am trying to understand whether rolling deployment of an application is possible in WebLogic. The WebLogic version is 12.1.2.0.0.
By rolling deployment I mean deploying the new version to a single node or a child cluster, after removing that node or child cluster from the targets of the existing deployment. This ensures the current version of the deployment on the remaining cluster keeps functioning, probably with degraded performance due to the removed node or child cluster, and the operations team can verify whether the intended change has worked. Once verified, the deployment's targets can be updated to add the rest of the child cluster(s).
I am aware of the -redeploy option available in WebLogic, which means no outage, but it deploys to the same targets as the original deployment.
java weblogic.Deployer -adminurl http://localhost:8802
-username weblogic -password weblogic -name VersionedApp
-targets adminServer -redeploy -source
C:/tmp/VersionedApp2 -appversion version2
However, I am not sure how it will behave if there is an active DB in the backend.
Any insight on this is highly appreciated.
You should look at the -adminmode attribute for deploying; see the Oracle docs: http://docs.oracle.com/middleware/1213/wls/DEPGD/wldeployer.htm#DEPGD318
You first need to enable the admin port; an application deployed in admin mode can then be accessed only via the admin port (the context is visible on the admin port, not on the production one).
Once tests are OK, you can promote the application from the "admin" state to the "active" state by using the "-start" parameter in weblogic.Deployer, as sketched below.
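For example, following the command shown earlier (a sketch; the URL, credentials, target, and source path are placeholders, and -adminmode assumes the admin port is enabled):

java weblogic.Deployer -adminurl http://localhost:8802
    -username weblogic -password weblogic -name VersionedApp
    -targets myCluster -deploy -adminmode -source C:/tmp/VersionedApp2

Then, after verifying the application on the admin port, promote it to the active state:

java weblogic.Deployer -adminurl http://localhost:8802
    -username weblogic -password weblogic -name VersionedApp -start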