Share multiple API Gateways in one Serverless Compose service - aws-api-gateway

I have a project using Serverless Framework Compose with the following architecture:
/services
--- infra
--- service-a
--- service-b
The 'infra' service creates and shares two API Gateways (a private one and a public one), but in each service I can use only one. In each service I have a public Lambda for creating an object (public API) and a Lambda for an admin to approve the object (private API).
I need this specific pattern because it's a medical project.
How can I select an API Gateway for each Lambda?

There's no "good" way to do it if you need to have two different API Gateways in the same Serverless service. The best approach would be to separate it into more services, e.g. service-a-public, service-a-private, and so on.
If you really need to use a single service that uses two separate APIGWs, then you need to write the integration as raw CloudFormation.
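As a hedged sketch of that second option, service-a's serverless.yml could keep the shared public API as its default API (so the public Lambda keeps a normal http event) and wire the private Lambda to the second API with raw CloudFormation. The output names (PublicApiId, PrivateApiId, ...), paths and function names below are assumptions for illustration, not something the Framework provides out of the box:

# serverless.yml of service-a (sketch)
provider:
  name: aws
  apiGateway:
    # the shared public API becomes the default API of this service
    restApiId: ${cf:infra-${sls:stage}.PublicApiId}
    restApiRootResourceId: ${cf:infra-${sls:stage}.PublicApiRootResourceId}

functions:
  createObject:
    handler: src/create.handler
    events:
      - http:
          path: objects
          method: post
  approveObject:
    handler: src/approve.handler
    # no http event: the private API integration is raw CloudFormation below

resources:
  Resources:
    ApproveResource:
      Type: AWS::ApiGateway::Resource
      Properties:
        RestApiId: ${cf:infra-${sls:stage}.PrivateApiId}
        ParentId: ${cf:infra-${sls:stage}.PrivateApiRootResourceId}
        PathPart: approve
    ApproveMethod:
      Type: AWS::ApiGateway::Method
      Properties:
        RestApiId: ${cf:infra-${sls:stage}.PrivateApiId}
        ResourceId:
          Ref: ApproveResource
        HttpMethod: POST
        # how the private API is secured (resource policy, IAM, ...) is up to you
        AuthorizationType: NONE
        Integration:
          Type: AWS_PROXY
          IntegrationHttpMethod: POST
          Uri:
            Fn::Join:
              - ''
              - - 'arn:aws:apigateway:'
                - Ref: AWS::Region
                - ':lambda:path/2015-03-31/functions/'
                # ApproveObjectLambdaFunction is the logical id the Framework
                # generates for the approveObject function
                - Fn::GetAtt: [ApproveObjectLambdaFunction, Arn]
                - '/invocations'
    ApprovePermission:
      Type: AWS::Lambda::Permission
      Properties:
        FunctionName:
          Ref: ApproveObjectLambdaFunction
        Action: lambda:InvokeFunction
        Principal: apigateway.amazonaws.com
    # an AWS::ApiGateway::Deployment (and stage) for the private API is still
    # needed and is omitted here for brevity

With Compose you could also pass those ids into the services as parameters instead of reading them with ${cf:...}.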

Related

What is the difference between an Endpoints service and AppEngine service when they have the same URL?

I have two services in Endpoints in GCP to host two APIs: Service A and Service B.
Service A's host is projectid.appspot.com.
Service B's host is test-dot-projectid.appspot.com.
When I deploy my app using gcloud app deploy to Service A's test service in App Engine, my app.yaml looks like this:
runtime: go
env: flex
service: test
endpoints_api_service:
  name: projectid.appspot.com
  rollout_strategy: managed
handlers:
- url: .*                           # mandatory
  secure: always                    # deprecated in flex environment
  redirect_http_response_code: 301  # optional
  script: _go_app                   # mandatory
From my understanding, the app has been deployed to Service A's URL projectid.appspot.com, but with the subdomain test, so test-dot-projectid.appspot.com.
However, is this now not technically deploying to Service B on a default service, i.e. test-dot-projectid.appspot.com?
Is this not interfering with deploying on Service A with the service test? What is the difference?
My understanding is: if Service A is "projectid.appspot.com", only "projectid.appspot.com" will be routed to A, not "test.projectid.appspot.com". So you can safely deploy Service B with "test.projectid.appspot.com". But I am not sure. Have you tried it?
When an application has multiple services, the service names must be configured and must be unique.
The default service is not required to have a service name; this is the case when the application has a single service.
When there are multiple services (Service A, Service B), the application must first be deployed to the default service, after which each service must have its own app.yaml file.
It is recommended that each service be configured with a unique service name to prevent confusion. If a unique service name is not configured for a service, it will by default deploy to the default service's URL. In your case, since the default service URL and Service A's service URL are the same, it is causing a conflict.
To deploy to a specific service, you will need to specify its service name and its app.yaml file.
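As a hedged illustration of that layout (the directory names below and the use of the test service name are just examples, not taken from your setup), the per-service configuration and deployment could look like this:

# default/app.yaml - deployed first; the default service needs no "service:" line
runtime: go
env: flex

# service-b/app.yaml - a separately named service
runtime: go
env: flex
service: test

# deploy the default service first, then the named service:
#   gcloud app deploy default/app.yaml
#   gcloud app deploy service-b/app.yaml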
For more information, please refer to:
Planning your directory structure and app.yaml naming:
https://cloud.google.com/appengine/docs/flexible/go/configuration-files#directory_structure
How to deploy to multiple or specific service in a multi-service environment:
https://cloud.google.com/appengine/docs/flexible/go/testing-and-deploying-your-app#deploying_multiple_services
App Engine applications are made up of one or more services (previously known as modules). Each service's code is independent, services can run different runtimes, and you deploy versions of each service. The URL is how your application is accessed, either via a user request or an API call to the service.
For more information, please see https://cloud.google.com/appengine/docs/standard/python/microservices-on-app-engine#app_engine_services_as_microservices
Endpoints are part of the Cloud Endpoints Frameworks service offered by Google Cloud to manage, deploy, maintain and secure your API. In App Engine, Endpoints Frameworks is only supported on the Python 2.7 and Java runtime environments. It is a tool that allows you to generate a REST API and client libraries for your application.
For more information, please see
https://cloud.google.com/endpoints/docs/frameworks/about-cloud-endpoints-frameworks

API Management Service: How to import Service Fabric Cluster APIs?

We have created two APIs and deployed them to a Service Fabric cluster, which exposes them as https://[ClusterURL]:8100 and https://[ClusterURL]:8101.
Now we want to expose these APIs via API Management Service, and we couldn't find any easy way to do so. There is one article at https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-tutorial-deploy-api-management, but it's very hard to understand and relate to this SDK.
We managed to create an API Management Service instance and also to create a blank API (or import one through Swagger) using the SDK. But we don't know how to import the Service Fabric API.
We could also create an API Management backend pointing to the Service Fabric app, but then we couldn't find any way to bind this backend to any API created in the API Management Service.
Any help, sample, and/or pointing in right direction is greatly appreciated.
For Service Fabric integration to work you need:
A VNET that includes both your SF cluster and your APIM instance.
A Backend entity: https://learn.microsoft.com/en-us/rest/api/apimanagement/backend/createorupdate - it lets APIM know where your cluster is and provides it with the credentials necessary to make calls.
A set-backend-service policy: https://learn.microsoft.com/en-us/azure/api-management/api-management-transformation-policies#SetBackendService - it's usually placed in the inbound section of the API that needs to talk to SF. You should omit the "base-url" attribute, use "backend-id" to specify the id of the backend entity created in the previous point, and use the other "sf-*" attributes to configure exactly how the call should be made.
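A rough sketch of what that policy could look like (the backend id "service-fabric-backend" and the fabric:/ service name are placeholders for whatever you created, and which sf-* attributes you need depends on your service):

<policies>
  <inbound>
    <base />
    <set-backend-service backend-id="service-fabric-backend"
        sf-resolve-condition="@(context.LastError?.Reason == &quot;BackendConnectionFailure&quot;)"
        sf-service-instance-name="fabric:/MyApp/MyApiService" />
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>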

How to implement just in time access for a deployment server?

BACKGROUND
We are about to set up a deployment server that will be used to manage Azure resources. The deployment server will run pre-defined PowerShell scripts and deploy ARM-templates.
This article describes how to use service principals and key vaults so that the application that runs inside the deployment server can securely execute deployment scripts.
PROBLEM
Frequently, the deployment server will be updated with scripts, new pipelines, different types of configuration, code snippets, templates, etc. When changes are made on the deployment server, we do not want the secrets to be exposed in any way.
A JUST IN TIME APPROACH – CUSTOM ACCESS KEY API
The functionality we are looking for can possibly be implemented with a custom access key API with the following workflow:
1. In a service request portal, a deployment ticket is signed by an approver
2. The deployment server receives the signed deployment ticket
3. The deployment server sends the signed ticket to a custom access key API and receives a temporary service principal and access key (sketched below)
4. The deployment server executes scripts (with the temporary service principal)
5. The temporary service principal and access key are automatically removed
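To make steps 3-5 a bit more concrete, here is a rough sketch of what the custom access key API could do with the Az PowerShell module. The display name, role and scope are placeholders, and a real implementation would also need to hand the credential to the deployment server securely and enforce an expiry:

# create a short-lived service principal for one approved deployment ticket
$sp = New-AzADServicePrincipal -DisplayName "deploy-ticket-12345"

# grant it only the scope the ticket was approved for
New-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"

# ... the deployment server runs its scripts with this principal ...

# clean up: remove the role assignment and the principal again
Remove-AzRoleAssignment -ObjectId $sp.Id `
    -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>"
Remove-AzADServicePrincipal -ObjectId $sp.Id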
WHY A CUSTOM ACCESS KEY API?
The custom access key API adds the following capabilities:
Compared to a deployment server, the API has a smaller footprint, and we believe that updates to the service will be rare and can be done in a very controlled manner.
The API can give the deployment server access based on the exact need (subscription, resource group, etc.)
The API can use digital signatures to verify the original approver of the ticket
RECOMMENDED APPROACH?
What is the recommended approach to implement just in time access for a deployment server?

Managing multiple REST APIs in Azure API Management

I am building REST APIs with microservices, which means I have different services providing different resources. Suppose I have the services below:
ServiceA provides the resources resourcesA and resourcesA1 with the URLs below:
https://my-internal-endpoint-for-serviceA/resourcesA
https://my-internal-endpoint-for-serviceA/resourcesA1
ServiceB provides the resources resourcesB and resourcesB1 with the URLs below:
https://my-internal-endpoint-for-serviceB/resourcesB
https://my-internal-endpoint-for-serviceB/resourcesB1
Now, I want to manage them in Azure API Management. To publish them (by importing the Swagger document from the services), the API Management portal needs an API path for publishing. So serviceA and serviceB can be published as below:
https://my-api-azure-api.net/serviceA/resourcesA
https://my-api-azure-api.net/serviceA/resourcesA1
https://my-api-azure-api.net/serviceB/resourcesB
https://my-api-azure-api.net/serviceB/resourcesB1
But for more resource-based API management, I expect the published APIs to look more like this:
https://my-api-azure-api.net/resourcesA
https://my-api-azure-api.net/resourcesA1
https://my-api-azure-api.net/resourcesB
https://my-api-azure-api.net/resourcesB1
Unfortunately, API Management does not allow me to publish two APIs (serviceA and serviceB) to the same path (the root path in this case). I don't want to put the service name (or something equivalent) in the URL path, as the service name duplicates the resource name it provides. How do I work around this?
Azure API Management policies can help you here, in particular the control flow policy combined with the ability to forward requests. The documentation is here: https://learn.microsoft.com/en-us/azure/api-management/api-management-policy-reference
I would approach this by setting up the resources as a single API: add one service via Swagger and then add the other services' operations to it to make it complete (as you want it to appear as a single complete service). Once this is in place, you are free to apply the policies.
Note: you may have to expand the resource path in the following way:
https://my-api-azure-api.net/mynewservice/resourcesA
https://my-api-azure-api.net/mynewservice/resourcesA1
https://my-api-azure-api.net/mynewservice/resourcesB
https://my-api-azure-api.net/mynewservice/resourcesB1
Maybe the answer to this question can help:
How to chain APIs using Azure API management
You can use the same policy to map several operations of the same API in API Management to different backend APIs.
But in general all APIs are exposed as <myGateway>.azure-api.net/<myApi>/<myOperation>.
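Building on that, a hedged sketch of such a policy on the combined API could look like the following (the path checks are assumptions derived from the URLs in the question):

<policies>
  <inbound>
    <base />
    <choose>
      <when condition="@(context.Request.Url.Path.Contains(&quot;resourcesA&quot;))">
        <set-backend-service base-url="https://my-internal-endpoint-for-serviceA" />
      </when>
      <when condition="@(context.Request.Url.Path.Contains(&quot;resourcesB&quot;))">
        <set-backend-service base-url="https://my-internal-endpoint-for-serviceB" />
      </when>
    </choose>
  </inbound>
  <backend>
    <base />
  </backend>
  <outbound>
    <base />
  </outbound>
</policies>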

OSGi - multiple instances of a service

How can I create multiple instances of a bundle that consumes an external web service?
The external web service requires clients to log on before using its services, and I have multiple accounts. The problem is that I want to be able to add multiple instances, one for each account. Each instance is an OSGi declarative service that consumes the external service.
Do I have to deploy a new bundle for each account? This does not feel like the right way to solve this.
What you need is multiple instances of an OSGi component or service, not multiple instances of a bundle.
I'd recommend a service factory, where each OSGi config that you create (account parameters in your case) for your service causes a new instance of a service to be created.
Neil Bartlett's tutorial at http://njbartlett.name/2010/07/19/factory-components-in-ds.html looks like a good starting point for that.
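The annotation-based version of what that tutorial shows is a declarative services component that requires configuration, so one component instance is created per factory configuration. A minimal sketch (the WebServiceClient interface, the PID com.example.client and the property names are made up):

package com.example.client;

import java.util.Map;

import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.ConfigurationPolicy;

// one instance of this component is created for every factory configuration
// with the PID "com.example.client" (e.g. a file com.example.client-account1.cfg
// per account when using Felix File Install)
@Component(
        service = WebServiceClient.class,
        configurationPid = "com.example.client",
        configurationPolicy = ConfigurationPolicy.REQUIRE)
public class WebServiceClientImpl implements WebServiceClient {

    private String username;
    private String password;

    @Activate
    void activate(Map<String, Object> config) {
        // each configuration (= each account) gets its own authenticated instance
        this.username = (String) config.get("username");
        this.password = (String) config.get("password");
        // log on to the external web service here with these credentials
    }

    @Override
    public void callExternalService() {
        // use the session that belongs to this account
    }
}

// assumed service interface, shown here only to keep the sketch self-contained
interface WebServiceClient {
    void callExternalService();
}

Consumers can then have all instances injected, or filter on a configuration property to pick the one for a specific account.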
Is that bundle under your control - can you refactor it?
If yes, it might be useful to expose a client factory service rather than the client service itself.
Then each instance can log in to a different account.