I just tried to publish an out-of-the-box stateless webapi service to an Azure SF cluster.
Locally it runs fine, but once published, I can't seem to reach http://mysfcluster.westeurope.cloudapp.azure.com:8820/api/values
Do I need to declare the port somewhere other than in the ServiceManifest.xml?
You have to declare it in the "Custom endpoints" section when creating the node type.
https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/
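Note that the port declared in ServiceManifest.xml only opens the listener inside the cluster; the "Custom endpoints" field on the node type is what creates the matching rule on the cluster's Azure load balancer. A minimal sketch of the manifest side, assuming your service listens on 8820:

<Resources>
  <Endpoints>
    <!-- the port the stateless Web API listens on inside the cluster -->
    <Endpoint Name="ServiceEndpoint" Protocol="http" Port="8820" />
  </Endpoints>
</Resources>

If the cluster already exists, you can also add a load-balancing rule and health probe for port 8820 to the cluster's load balancer in the Azure portal instead of recreating the node type.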
I am using a Logic App for which I need to create a custom connector. This connector depends on a web service, which I am trying to add using a WSDL definition.
If I provide the URL, it needs authentication, which I am not able to provide via this UI. I can see that the parameters can be provided when using the connector in a Logic App, but here it fails to pull the services and hence does not create the definition for the connector.
I tried downloading the WSDL and adding it here as a file, but the schema has xs:import tags, because of which it fails again. And as per this answer, I cannot replace them with the actual schema.
<xs:import namespace="http://some.name/" schemaLocation="./path/to/it.xsd"/>
Is there a way to make this work from the WSDL without having to provide the custom connector definition manually? It contains a lot of endpoints, and it would be too much work to add all the actions and triggers by hand. It would also serve as a reference for me if I need it again in a similar scenario.
You may try this: if the service is accessible over the internet, you can call its endpoint over HTTP or HTTPS from Azure Logic Apps. This article walks through the detailed steps: https://learn.microsoft.com/en-us/azure/connectors/connectors-native-http
If it is not accessible over the internet, this article covers the step-by-step process: https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-gateway-connection
Before you can access data sources on premises from your logic apps, you need to create an Azure resource after you install the on-premises data gateway on a local computer. Your logic apps then use this Azure gateway resource in the triggers and actions provided by the on-premises connectors that are available for Azure Logic Apps.
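As a rough sketch (in the Logic App's code view), an HTTP action calling a SOAP endpoint with Basic authentication could look roughly like this; the URI, SOAPAction, body and parameter names are placeholders:

"Call_SOAP_service": {
  "type": "Http",
  "inputs": {
    "method": "POST",
    "uri": "https://your-service.example.com/Service.svc",
    "headers": {
      "Content-Type": "text/xml; charset=utf-8",
      "SOAPAction": "http://some.name/YourOperation"
    },
    "body": "<soap:Envelope>...</soap:Envelope>",
    "authentication": {
      "type": "Basic",
      "username": "@parameters('serviceUser')",
      "password": "@parameters('servicePassword')"
    }
  },
  "runAfter": {}
}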
I would like to use a docker compose file to deploy multiple public end points for our Linux hosted site.
We already have a deployed site that has images stored on a private ACR and is hosted on an Azure App Service (using Web App for Containers). It is deployed via Azure DevOps and works well.
We would, however, like to use the same site to host an additional component, an API, so that we would end up with these endpoints:
https://www.example.com - the main site
https://www.example.com/api - the api
We would like to avoid a second app service or a subdomain if possible. The architecture we prefer is to use the same https certificate and ports (443) to host the api. The web site and api share a similar code base.
In the standard app service world, we could easily have deployed a virtual directory to the main app which is simple enough.
This model though seems to be more complicated when using containers.
How can we go about this? I've already had a look at this documentation: https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-multi-container-app. However, in this example, the second container is a private one - which doesn't get exposed.
Should we use a docker compose file (example please)? Or alternatively, is there a way we can use the Azure DevOps task to deploy to a virtual directory in the way that I would like? This is the task we are using for the single container deployment:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-containers?view=azure-devops
Web App for Containers is a type of App Service and, as you have seen, it can only expose one container to the outside (the internet); the others stay private. So using a multi-container Web App to deploy your images and expose multiple public endpoints, such as the main site and the API, is not possible.
Given that the Web App only exposes one container to the outside, what you can do instead is expose a single container and route to the endpoints yourself, either in your code or through a tool such as Nginx, and then deploy that to Web App for Containers. This way you can serve multiple endpoints from a single App Service.
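As a rough sketch of that idea, a docker-compose file where only an Nginx container is published and it proxies to the other two (image names and internal ports are placeholders for your ACR images):

version: '3'
services:
  proxy:
    image: myregistry.azurecr.io/proxy-nginx:latest  # nginx with the routing config below baked in
    ports:
      - "80:80"   # the only published port; App Service terminates HTTPS on 443 for you
  web:
    image: myregistry.azurecr.io/web:latest          # main site, listening on 5000 internally
  api:
    image: myregistry.azurecr.io/api:latest          # API, listening on 5001 internally

And the corresponding Nginx routing inside the proxy image:

location /     { proxy_pass http://web:5000; }
location /api/ { proxy_pass http://api:5001; }

The same routing works if you fold everything into a single image, which is the approach suggested above.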
I have two services in Endpoints in GCP to host two APIs
They are
Service A
and
Service B
Service A's host is
projectid.appspot.com
Service B's host is
test-dot-projectid.appspot.com
When I deploy my app using gcloud app deploy to Service A's test service in App Engine, my app.yaml looks like this:
runtime: go
env: flex
service: test
endpoints_api_service:
  name: projectid.appspot.com
  rollout_strategy: managed
handlers:
- url: .* # mandatory
  secure: always # deprecated in flex environment
  redirect_http_response_code: 301 # optional
  script: _go_app # mandatory
From my understanding, the app has been deployed to Service A's URL
projectid.appspot.com but with the subdomain test so test-dot-projectid.appspot.com
However, is this now not technically deploying to Service B, i.e.
test-dot-projectid.appspot.com?
Is this not interfering with deploying to Service A under the test service? What is the difference?
My understanding is: if service A is "projectid.appspot.com", only "projectid.appspot.com" will be routed to A, not "test.projectid.appspot.com". So you can safely deploy service B with "test.projectid.appspot.com". But I am not sure. Have you tried it?
When an application has multiple services, the service names must be configured and must be unique.
The default service is not required to have a service name; this is true when the application has only a single service.
When you have multiple services (Service A, Service B), the application must first be deployed to the default service, after which each service must have its own app.yaml file.
It is recommended that each service be configured with a unique service name to prevent conflicts. If a unique service name is not configured for a service, it will by default be deployed to the default service's URL. In your case, since the default service URL and Service A's URL are the same, this is causing a conflict.
To deploy to a specific service, you need to specify its service name in that service's app.yaml file.
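For example (a sketch using hypothetical paths): the default service's app.yaml can omit the service name, while each additional service sets its own:

# ./app.yaml - default service (service name may be omitted)
runtime: go
env: flex

# ./test-service/app.yaml - deployed as the "test" service
runtime: go
env: flex
service: test

You then deploy a specific service by pointing gcloud app deploy at that service's app.yaml, e.g. gcloud app deploy test-service/app.yaml.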
For more information, please refer to:
Planning your directory structure and app.yaml naming:
https://cloud.google.com/appengine/docs/flexible/go/configuration-files#directory_structure
How to deploy to multiple or specific service in a multi-service environment:
https://cloud.google.com/appengine/docs/flexible/go/testing-and-deploying-your-app#deploying_multiple_services
App Engine is made up of application resources that are made up of one or more services (previously known as modules). Each service is code independent and can run different runtimes and you deploy versions of that service. The URL is how your application is accessed either via a user request or an API call to the service.
For more information, please see https://cloud.google.com/appengine/docs/standard/python/microservices-on-app-engine#app_engine_services_as_microservices
Endpoints are part of the Cloud Endpoints Frameworks service offered by Google Cloud to manage, deploy, maintain, and secure your API. In App Engine, the Endpoints Frameworks are only supported on the Python 2.7 and Java runtime environments. It is a tool that allows you to generate REST APIs and client libraries for your application.
For more information, please see
https://cloud.google.com/endpoints/docs/frameworks/about-cloud-endpoints-frameworks
We have created two APIs and deployed them to a Service Fabric cluster, which exposes them as https://[ClusterURL]:8100 and https://[ClusterURL]:8101.
Now we want to expose these APIs via an API Management service, and we couldn't find any easy way to do so. There is one article at https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-deploy-api-management, but it's really very hard to understand and relate to this SDK.
We managed to create an API Management Service instance and also to create a blank API (or import through Swagger) using the SDK. But we don't know how to import the Service Fabric API.
And we could create an API Management BackEnd pointing to the Service Fabric app, but then we couldn't find any way to bind this BackEnd to any API created in the API Management Service.
Any help, sample, and/or pointing in right direction is greatly appreciated.
For Service Fabric integration to work you need:
A VNET that includes both your SF cluster and your APIM instance.
A Backend entity: https://learn.microsoft.com/en-us/rest/api/apimanagement/backend/createorupdate It lets APIM know where your cluster is and provides it with the necessary credentials to make calls.
A set-backend-service policy: https://learn.microsoft.com/en-us/azure/api-management/api-management-transformation-policies#SetBackendService It is usually placed in the inbound section of the API that needs to talk to SF. You should omit the "base-url" attribute, use "backend-id" to specify the id of the backend entity created in the previous step, and use the other "sf-*" attributes to configure exactly how the call should be made.
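For instance, a minimal inbound sketch (the backend id and service instance name are placeholders for whatever you created in the previous step):

<inbound>
  <base />
  <!-- route this API's requests to the Service Fabric backend entity -->
  <set-backend-service backend-id="my-sf-backend"
                       sf-service-instance-name="fabric:/MyApplication/MyApiService" />
</inbound>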
I'm using Fabric v0.6 on Bluemix and composer-ui on my local machine. I was able to make my model and logic files and deployed them to my Blockchain network on Bluemix. Now I want to invoke the chaincode I deployed with composer from an app that is already running on Bluemix (node.js), not from the composer-ui. How would I approach this?
I have seen a sample app here: https://github.com/hyperledger/composer-sample-applications/tree/master/packages/getting-started
But it requires this configuration file: https://github.com/hyperledger/composer-sample-applications/blob/master/packages/getting-started/config/default.json
And that configuration file specifies the connectionProfile, which I guess is the connection profile I created on composer-ui to connect to my Blockchain service on Bluemix.
Do I need to have Fabric Composer running in order to invoke the chaincode? Or is there any way to invoke my chaincode completely independently of the Composer runtime?
A couple of options:
Use the composer-rest-server and write your front-end application against a domain-specific REST API
Pass the connection profile information into the composer-client JS API using an environment variable (see the sketch after this list). See: https://github.com/hyperledger/composer/issues/602
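For option 2, a rough sketch of what calling the deployed business network from your existing Node.js app could look like with composer-client; the profile name, network identifier, identity and asset type are placeholders, and this assumes the connection profile is available to the runtime (created locally or supplied via the environment variable from the linked issue):

// npm install composer-client
const BusinessNetworkConnection = require('composer-client').BusinessNetworkConnection;

const connection = new BusinessNetworkConnection();

// connect using the profile created in composer-ui plus an enrolled identity
connection.connect('bluemixProfile', 'my-network', 'admin', 'adminpw')
    .then(() => connection.getAssetRegistry('org.example.MyAsset'))
    .then((registry) => registry.getAll())
    .then((assets) => {
        console.log(assets);
        return connection.disconnect();
    })
    .catch((err) => {
        console.error(err);
    });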