I currently have a service named matrix-calculator.
My service is deployed as
http://matrix-calculator.company.prod.com/api/v1/calculate
I'm not sure this is the best way to structure the URL. I would prefer the following, as it allows local development when running multiple services:
http://matrix-calculator.company.prod.com/matrix-calculator/api/v1/matrix
Thoughts?
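One reason the service-name prefix helps locally is that a single reverse proxy can front several services on one host, routing purely by path. A minimal sketch of that idea, assuming a local Nginx and made-up port numbers:

```
# hypothetical local nginx config: one host, several services,
# each reachable under its own service-name prefix
server {
    listen 8080;

    location /matrix-calculator/ {
        proxy_pass http://localhost:5001/;   # matrix-calculator running locally
    }

    location /other-service/ {
        proxy_pass http://localhost:5002/;   # another service on the same host
    }
}
```

With the prefix baked into the service's URL scheme, the same paths work locally and in prod without per-environment rewriting.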
Related
I would like to use a docker compose file to deploy multiple public endpoints for our Linux-hosted site.
We already have a deployed site that has images stored on a private ACR and is hosted on an Azure App Service (using Web App for Containers). It is deployed via Azure DevOps and works well.
We would, however, like to use the same site to host an additional component, an API, so that we end up with these endpoints:
https://www.example.com - the main site
https://www.example.com/api - the API
We would like to avoid a second App Service or a subdomain if possible. The architecture we prefer is to serve the API over the same HTTPS certificate and port (443). The website and API share a similar code base.
In the standard app service world, we could easily have deployed a virtual directory to the main app which is simple enough.
This model, though, seems to be more complicated when using containers.
How can we go about this? I've already had a look at this documentation: https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-multi-container-app. However, in this example, the second container is a private one - which doesn't get exposed.
Should we use a docker compose file (example please)? Or, alternatively, is there a way we can use the Azure DevOps task to deploy to a virtual directory in the way that I would like? This is the task we are using for the single-container deployment:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-containers?view=azure-devops
For your requirements: Web App for Containers is still a type of Web App service, and as you have seen, it can expose only one container to the outside (the Internet); the others remain private. So you cannot use a multi-container Web App to expose multiple public endpoints, such as the main site and the API, directly.
Because the Web App exposes only one container to the outside, what you can do instead is route to the endpoints yourself, either in your code or through a tool such as Nginx, behind that single exposed container, and deploy that to the Web App for Containers. Only this way can you reach multiple endpoints through a single App Service.
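A minimal sketch of that single-exposed-container approach, assuming hypothetical image names in your ACR and an Nginx container as the one public entry point (note that App Service multi-container supports only a subset of Docker Compose options):

```yaml
version: '3.3'
services:
  router:
    image: myacr.azurecr.io/nginx-router:latest   # hypothetical routing image
    ports:
      - "80:80"        # the single container App Service exposes
  site:
    image: myacr.azurecr.io/site:latest           # hypothetical main-site image
  api:
    image: myacr.azurecr.io/api:latest            # hypothetical API image
```

The nginx-router image would carry a config along the lines of `location /api { proxy_pass http://api:80; }` and `location / { proxy_pass http://site:80; }`, so both paths are served from the same hostname, certificate, and port.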
I have two services in Endpoints in GCP hosting two APIs.
They are Service A and Service B.
Service A's host is projectid.appspot.com and Service B's host is test-dot-projectid.appspot.com.
When I deploy Service A's test service in App Engine using gcloud app deploy, my app.yaml looks like this:
runtime: go
env: flex
service: test
endpoints_api_service:
name: projectid.appspot.com
rollout_strategy: managed
handlers:
- url: .* #mandatory
secure: always #deprecated in flex environment
redirect_http_response_code: 301 #optional
script: _go_app #mandatory
From my understanding, the app has been deployed to Service A's URL
projectid.appspot.com but with the subdomain test so test-dot-projectid.appspot.com
However, is this not technically deploying to Service B as a default service, i.e.
test-dot-projectid.appspot.com
Is this not interfering with deploying on service A with service test? What is the difference?
My understanding is: if service A is "projectid.appspot.com", only "projectid.appspot.com" will be routed to A, not "test.projectid.appspot.com". So you can safely deploy service B with "test.projectid.appspot.com". But I am not sure. Have you tried it?
When an application has multiple services, the service names must be configured and must be unique.
The default service is not required to have a service name; this is the case when the application has only a single service.
When there are multiple services (Service A, Service B), the application must first be deployed to the default service. After that, each service must have its own app.yaml file.
It is recommended that each service be configured with a unique service name to prevent conflicts. If a unique service name is not configured for a service, it will deploy to the default service's URL. In your case, since the default service URL and Service A's URL are the same, this causes the conflict.
To deploy to a specific service, you need to specify that service's name and its app.yaml file.
For more information, please refer to:
Planning your directory structure and app.yaml naming:
https://cloud.google.com/appengine/docs/flexible/go/configuration-files#directory_structure
How to deploy to multiple or specific service in a multi-service environment:
https://cloud.google.com/appengine/docs/flexible/go/testing-and-deploying-your-app#deploying_multiple_services
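As a sketch of that layout (directory names here are hypothetical), the second service would get its own app.yaml with a unique service name, and both services are then deployed explicitly:

```yaml
# serviceb/app.yaml - give the second service its own unique name
runtime: go
env: flex
service: test    # unique service name; only the default service may omit this field

# deploy both services explicitly by pointing at each app.yaml:
#   gcloud app deploy default/app.yaml serviceb/app.yaml
```

With a unique `service:` value in place, the deployment no longer falls through to the default service's URL, which is the conflict described above.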
An App Engine application is made up of one or more services (previously known as modules). Each service's code is independent and can run a different runtime, and you deploy versions of each service. The URL is how your application is accessed, either via a user request or an API call to the service.
For more information, please see https://cloud.google.com/appengine/docs/standard/python/microservices-on-app-engine#app_engine_services_as_microservices
Endpoints are part of the Cloud Endpoints Frameworks service offered by Google Cloud to manage, deploy, maintain, and secure your API. In App Engine, the Endpoints Frameworks are only supported on the Python 2.7 and Java runtime environments. It is a tool that allows you to generate a REST API and client libraries for your application.
For more information, please see
https://cloud.google.com/endpoints/docs/frameworks/about-cloud-endpoints-frameworks
If I want to access the REST API of the OpenShift master server from anywhere in my company, I use https://master.test04.otc-test.company.com:8443, which works just fine.
Now I'm writing an admin application that accesses the REST API and is deployed in this OpenShift cluster. Is there a generic name or environment variable in OpenShift to get the hostname of the master server?
Background: my admin application will be deployed on multiple OpenShift clusters which do not have the same URL. It would be very handy to have it autodiscover the hostname of the current master server instead of configuring this value for every deployment.
Use environment variables:
https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
In the container, unless mounting of the service account details has been disabled, you can also access the directory:
/var/run/secrets/kubernetes.io/serviceaccount
In this directory you will find a token file containing the access token for the service account the container runs as. This means you can create a separate service account for the application in that project and use RBAC to control what it can do via the REST API.
That same directory also has a namespace file so you know what project the container is running in, and files with certificates to use when accessing the REST API over a secure connection.
This is the recommended approach, rather than trying to pass an access token to your application through its configuration.
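A minimal sketch of that discovery, assuming the standard injected environment variables and the default service-account mount path (the helper names below are made up for illustration):

```python
import os

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def api_base_url(env=os.environ):
    # Every pod gets these variables injected, regardless of the cluster's URL,
    # so no per-cluster configuration is needed.
    host = env["KUBERNETES_SERVICE_HOST"]
    port = env["KUBERNETES_SERVICE_PORT"]
    return f"https://{host}:{port}"

def service_account_auth(sa_dir=SA_DIR):
    # Read the mounted token and namespace; ca.crt is the CA bundle used
    # to verify the API server's TLS certificate.
    def read(name):
        with open(os.path.join(sa_dir, name)) as f:
            return f.read().strip()
    return {
        "token": read("token"),
        "namespace": read("namespace"),
        "ca_cert": os.path.join(sa_dir, "ca.crt"),
    }
```

A request would then send the token as an `Authorization: Bearer <token>` header and verify TLS against `ca_cert`.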
Note that in OpenShift 4, if you need to access the OAuth server endpoint, it is at a separate URL from the REST API. In 3.x, they were at the same URL.
In 4.0, you can access the path /.well-known/oauth-authorization-server on the REST API URL, to get information about the separate OAuth server endpoint.
For additional information on giving REST API access to an application via a service account, see:
https://cookbook.openshift.org/users-and-role-based-access-control/how-do-i-enable-rest-api-access-for-an-application.html
Note that that page currently says you can use https://openshift.default.svc.cluster.local as the URL, but this doesn't work in OpenShift 4.
We have created two APIs and deployed them to a Service Fabric cluster, which exposes them as https://[clusterurl]:8100 and https://[clusterurl]:8101.
Now we want to expose these APIs via the API Management service, and we couldn't find any easy way to do so. There is one article at https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-deploy-api-management, but it is very hard to follow and to relate to this SDK.
We managed to create an API Management Service instance and also to create a blank API (or import through Swagger) using the SDK. But we don't know how to import the Service Fabric API.
We could create an API Management backend pointing to the Service Fabric app, but we couldn't find a way to bind this backend to any API created in the API Management service.
Any help, sample, and/or pointer in the right direction is greatly appreciated.
For Service Fabric integration to work, you need:
A VNet that includes both your SF cluster and your APIM instance.
A backend entity: https://learn.microsoft.com/en-us/rest/api/apimanagement/backend/createorupdate This lets APIM know where your cluster is and provides it with the credentials needed to make calls.
A set-backend-service policy: https://learn.microsoft.com/en-us/azure/api-management/api-management-transformation-policies#SetBackendService It is usually placed in the inbound section of the API that needs to talk to SF. Omit the "base-url" attribute, use "backend-id" to specify the id of the backend entity created in the previous step, and use the other "sf-*" attributes to configure exactly how the call should be made.
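A sketch of such a policy, assuming a backend entity with id "sf-backend" and a placeholder service instance name (both are hypothetical; check the policy reference linked above for the full set of sf-* attributes):

```xml
<policies>
    <inbound>
        <base />
        <!-- backend-id refers to the backend entity created in the previous step;
             sf-service-instance-name is a placeholder for your SF service. -->
        <set-backend-service backend-id="sf-backend"
                             sf-service-instance-name="fabric:/MyApp/MyApiService"
                             sf-resolve-condition="@((int)context.Response.StatusCode != 404)" />
    </inbound>
</policies>
```

With this in place, APIM resolves the Service Fabric service through the cluster's naming service instead of a fixed base URL, which is why "base-url" is omitted.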
I'm using Fabric v0.6 on Bluemix and composer-ui on my local machine. I was able to make my model and logic files and deployed them to my Blockchain network on Bluemix. Now I want to invoke the chaincode I deployed with composer from an app that is already running on Bluemix (node.js), not from the composer-ui. How would I approach this?
I have seen a sample app here: https://github.com/hyperledger/composer-sample-applications/tree/master/packages/getting-started
But it requires this configuration file: https://github.com/hyperledger/composer-sample-applications/blob/master/packages/getting-started/config/default.json
And that configuration file specifies the connectionProfile, which I guess is the connection profile I created on composer-ui to connect to my Blockchain service on Bluemix.
Do I need to have Fabric Composer running in order to invoke the chaincode? Or is there any way to invoke my chaincode completely independently of the Composer runtime?
A couple of options:
Use the composer-rest-server and write your front-end application against a domain specific REST API
Pass the connection profile information into the Composer JS composer-client API using an environment variable. See: https://github.com/hyperledger/composer/issues/602