Capistrano deployment when db is not same machine as web server

From what I understand, by default code will be deployed to all roles defined. Let's say you have a db on a different machine than your web server. You define roles for both the web server and the db, then deploy. Currently my Capistrano script is deploying the source to both machines. I want it to deploy the source only to the web server.
How can this be done? The Capistrano site mostly has examples of single-machine architectures.

In my case I have separate web and app servers. So that the code is not deployed to the web server, I use the following:
role :web, "myappserver.com", :no_release => true
Many of Capistrano's tasks are qualified not to run on servers where this option is set.
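Applied to the original question (db on its own machine), a minimal sketch in Capistrano 2 syntax, matching the answer above; the hostnames are placeholders:
role :web, "web.example.com"
role :app, "web.example.com"
role :db,  "db.example.com", :primary => true, :no_release => true   # no code checkout on the db host
With :no_release on the db role, deploy still runs migrations against that host's database, but the source itself only lands on the web/app machine.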

Injected db credentials change when I deploy a new app version to cloud

I deploy a web app to a local Cloud Foundry environment. As a database service for my DEV environment I have chosen the Marketplace service google-cloudsql-postgres with the plan postgres-db-f1-micro. Using the Web UI I created an instance with the name myapp-test-database and referenced it in the CF manifest:
applications:
- name: myapp-test
services:
- myapp-test-database
At first, all is fine. I can even redeploy the existing artifact. However, when I build a new version of my app and push it to CF, the injected credentials are updated and the app can no longer access the tables:
PSQLException: ERROR: permission denied for table
The tables are still there, but they're owned by the previous user. They were automatically created by the ORM in the public schema.
While the -OLD application still exists I can retrieve the old username/password from the CF Web UI or $VCAP_SERVICES and drop the tables.
Is this all because of Rolling App Deployments? But then I would expect a lot of complaints.
If you are strictly doing a cf push (or restart/restage), the broker isn't involved (Cloud Controller doesn't talk to it), and service credentials won't change.
The only action through cf commands that can modify your credentials is doing an unbind followed by a bind. Many, but not all, service brokers will throw away credentials on unbind and provide new, unique credentials for a bind. This is often desirable so that you can rotate credentials if credentials are compromised.
Where this can be a problem is if you have custom scripts or cf cli plugins to implement rolling deployments. Most tools like this will use two separate application instances, which means you'll have two separate bindings and two separate sets of credentials.
If you must have one set of credentials you can use a service key to work around this. Service keys are like bindings but not associated with an application in CloudFoundry.
The downside of a service key is that it's not automatically exposed to your application through $VCAP_SERVICES the way a binding is. To work around this, you can pass the service key's credentials into a user-provided service and then bind that to your application, or you can pass them into your application through other environment variables, like DB_URL.
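As a hedged sketch of that workaround (the service, key, and user-provided-service names below are placeholders, and the exact JSON fields depend on what your broker returns in the key):
cf create-service-key myapp-test-database shared-key
cf service-key myapp-test-database shared-key   # prints the key's credentials
cf create-user-provided-service myapp-test-db-creds -p '{"username":"myuser","password":"<from the key>","uri":"<from the key>"}'
cf bind-service myapp-test myapp-test-db-creds
Because the credentials now live in the service key, rebinding or redeploying the app no longer rotates them.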
The other option is to switch away from using scripts and cf cli plugins for blue/green deployment and to use the support that is now built into Cloud Foundry. With cf cli version 7+, cf push has a --strategy option which can be set to rolling to perform a rolling deployment. This does not create multiple application instances and so there would only ever exist one service binding and one set of credentials.
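For example, with cf CLI v7+ (app name as in your manifest):
cf push myapp-test --strategy rolling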
Request a static username using the extra bind parameter "username":
cf bind-service my-app-test-CANDIDATE myapp-test-database -c "{\"username\":\"myuser\"}"
With cf7+ it's possible to add parameters to the manifest:
applications:
- name: myapp-test
services:
- name: myapp-test-database
parameters: { "username": "myuser" }
https://docs.cloudfoundry.org/devguide/services/application-binding.html#arbitrary-params-binding
Note: arbitrary parameters in app manifests are not supported in cf CLI v6.x; they are supported in cf CLI v7.0 and later.
However, I can't find the new syntax here: https://docs.cloudfoundry.org/devguide/deploy-apps/manifest-attributes.html#services-block . The syntax I use comes from some other SO question.

How to host multiple sites on Azure Web App for Containers using docker compose

I would like to use a docker compose file to deploy multiple public end points for our Linux hosted site.
We already have a deployed site that has images stored on a private ACR and is hosted on an Azure App Service (using Web App for Containers). It is deployed via Azure DevOps and works well.
We would, however, like to use the same site to host an additional component, an API, so that we end up with these endpoints:
https://www.example.com - the main site
https://www.example.com/api - the api
We would like to avoid a second App Service or a subdomain if possible. Our preferred architecture is to use the same HTTPS certificate and port (443) to host the API. The web site and the API share a similar code base.
In the standard App Service world, we could easily have deployed a virtual directory to the main app, which is simple enough.
This model, though, seems to be more complicated when using containers.
How can we go about this? I've already had a look at this documentation: https://learn.microsoft.com/en-us/azure/app-service/containers/tutorial-multi-container-app. However, in this example, the second container is a private one - which doesn't get exposed.
Should we use a docker compose file (example please)? Or alternatively, is there a way we can use the Azure DevOps task to deploy to a virtual directory in the way that I would like? This is the task we are using for the single-container deployment:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-containers?view=azure-devops
Web App for Containers is still a type of Web App service, and as you have seen, it exposes only one container to the outside (the internet); the others stay private. So you cannot use a multi-container Web App to expose multiple containers as separate public endpoints.
Given that the Web App exposes only one container, what you can do instead is expose a single container that routes requests to your endpoints itself, either in your own code or through a tool such as Nginx, and deploy that to the Web App for Containers. That way you can serve multiple endpoints from a single App Service.
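As a hedged sketch of that approach (registry and image names are placeholders; the router image is assumed to be an Nginx build whose config proxies / to the web container and /api to the api container — within a compose file the service names resolve via DNS, so Nginx can proxy_pass to http://web and http://api):
version: '3.3'
services:
  router:
    image: myacr.azurecr.io/router-nginx:latest
    ports:
      - "80:80"   # the single exposed port; App Service terminates HTTPS (443) in front of it
  web:
    image: myacr.azurecr.io/mysite:latest
  api:
    image: myacr.azurecr.io/myapi:latest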

What is the difference between an Endpoints service and AppEngine service when they have the same URL?

I have two services in Cloud Endpoints in GCP to host two APIs: Service A and Service B.
Service A's host is projectid.appspot.com
Service B's host is test-dot-projectid.appspot.com
When I deploy my app to Service A's test service in App Engine using gcloud app deploy, my app.yaml looks like this:
runtime: go
env: flex
service: test
endpoints_api_service:
name: projectid.appspot.com
rollout_strategy: managed
handlers:
- url: .* #mandatory
secure: always #deprecated in flex environment
redirect_http_response_code: 301 #optional
script: _go_app #mandatory
From my understanding, the app has been deployed to Service A's URL, projectid.appspot.com, but with the subdomain test, so test-dot-projectid.appspot.com.
However, is this now not technically deploying to Service B on a default service, i.e. test-dot-projectid.appspot.com?
Is this not interfering with deploying on Service A with the service test? What is the difference?
My understanding is: if Service A is "projectid.appspot.com", only "projectid.appspot.com" will be routed to A, not "test-dot-projectid.appspot.com". So you can safely deploy Service B with "test-dot-projectid.appspot.com". But I am not sure. Have you tried it?
When an application has multiple services, the service names must be configured and must be unique.
The default service is not required to have a service name; this holds when the application consists of a single service.
When you have multiple services (Service A, Service B), the application must first be deployed to the default service, after which each service must have its own app.yaml file.
It is recommended to configure each service with a unique service name to prevent conflicts. If a unique service name is not configured, a service will by default deploy to the default service's URL. In your case, since the default service URL and Service A's URL are the same, this causes a conflict.
To deploy to a specific service, you need to specify its service name and its app.yaml file.
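For example, assuming each service keeps its own directory and app.yaml (the directory names here are placeholders), you can deploy both in one command:
gcloud app deploy serviceA/app.yaml serviceB/app.yaml
Each app.yaml declares its own unique service: name; the default service's app.yaml may omit it.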
For more information, please refer to:
Planning your directory structure and app.yaml naming:
https://cloud.google.com/appengine/docs/flexible/go/configuration-files#directory_structure
How to deploy to multiple or specific service in a multi-service environment:
https://cloud.google.com/appengine/docs/flexible/go/testing-and-deploying-your-app#deploying_multiple_services
App Engine is made up of application resources that are made up of one or more services (previously known as modules). Each service is code independent and can run different runtimes and you deploy versions of that service. The URL is how your application is accessed either via a user request or an API call to the service.
For more information, please see https://cloud.google.com/appengine/docs/standard/python/microservices-on-app-engine#app_engine_services_as_microservices
Endpoints are part of Cloud Endpoints, a service offered by Google Cloud to manage, deploy, maintain and secure your APIs. In App Engine, the Endpoints Frameworks are only supported on the Python 2.7 and Java runtime environments. It is a tool that allows you to generate REST APIs and client libraries for your application.
For more information, please see
https://cloud.google.com/endpoints/docs/frameworks/about-cloud-endpoints-frameworks

Desired State Configuration

I have two web servers, one service server, and a database server, and all these servers are domain-joined. I have set up a private VSTS build agent that builds my artifacts per build configuration, and all my DEV, QA and STAGING environments are set up on those servers.
My problem: I am looking for a way, using PowerShell Desired State Configuration, to take the environment-specific artifacts (DEV, QA or STAGING), copy them to a specific location on the two web servers, and ensure the IIS website hosted from those artifacts is configured correctly with all the required permissions; the scripts should also delete and recreate a particular Windows service on the service server, and perform the migration activities for the matching database on the database server, since each environment has its own database.
Any kind of help or suggestion would be appreciated. Thank you.
My suggestions are:
Don't use DSC for deployment (i.e. to deploy applications or databases)
Use DSC for configuration (e.g. to install IIS; see the sketch after this list)
Install the VSTS agent on each server in Deployment Groups mode, running as a service with local administrator privileges
Use the IIS deployment tasks designed for Deployment Groups
Use the PowerShell task to manage the Windows services (tip: help *-Service)
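As a hedged sketch of the configuration half only (node names are placeholders; this assumes the PSDesiredStateConfiguration module that ships with PowerShell 5.x on Windows Server):
Configuration WebServerConfig {
    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Node @('WEB01','WEB02') {
        WindowsFeature IIS {
            Name   = 'Web-Server'       # installs IIS
            Ensure = 'Present'
        }
        WindowsFeature AspNet45 {
            Name   = 'Web-Asp-Net45'    # ASP.NET 4.5 support for the hosted sites
            Ensure = 'Present'
        }
    }
}
WebServerConfig -OutputPath .\Mof                   # compiles one MOF per node
Start-DscConfiguration -Path .\Mof -Wait -Verbose
The environment-specific artifact copying, service recreation and database migrations would then live in release pipeline tasks rather than in DSC.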

VSTS Deployment to a deployment group from a UNC share

I am using Visual Studio Team Services (visualstudio.com) to build and deploy an ASP.NET website to two Azure VMs.
I have a build which on completion triggers a release to my two servers in a deployment group. When you configure a Deployment Group for Visual Studio Team Services you create an agent that by default runs as NT AUTHORITY\SYSTEM.
If I publish my build artifacts to Azure (the server option) then everything works fine and deployment succeeds to both my VMS. However when using a file-drop I get the following error:
The artifact directory does not exist:
\\MACHINE1\drop\RRStore\20170517.20. It can happen if the password of
the account NT AUTHORITY\SYSTEM is changed recently and is not updated
for the agent.
This is basically saying MACHINE2 cannot access \\MACHINE1\drop due to permissions. In Windows I can browse to this folder just fine, but since the agent is running as NT AUTHORITY\SYSTEM it cannot access it.
I want to use a file drop because my website is about 250MB (although in the meantime I am using the 'publish to server' option and deploying via Team Services).
I am unclear how to give permissions to the file drop, though, as the agent is running as SYSTEM. The machines are in a WORKGROUP, and giving permissions to 'Everyone' does not seem to work.
What is the correct way to configure access to a VSTS drop folder so that the deployment agent can access it?
A few possible options:
Set up a domain (I tried doing this, but then I need a new network interface and it sounds clunky)
Continue using Team Services to deploy the artifacts (or reduce the website size!)
Save to a storage account, but again I'm not sure how to configure that
Run as a different user account
I have similar problems when deploying with VSTS. Instead I chose to:
Run VSTS agent on the deployment group VM as a local user with limited access.
Impersonate the account on the deployment group VM to test its access to the drop folder.
Save/cache a different credential to access the drop folder if applicable.
(So the sensitive information stays on the VM.)
The cached credential can be a different local user account created on the drop server just for this purpose (see the sketch below).
Explicitly grant that local user access only to the parts of the file system it needs, to limit what the VSTS agent service account can touch.
This should work in most cases; in fact, this same approach is used in my VSTS, Jenkins and TFS instances, and it should save you from having to set up a domain to solve this problem.
This may not be the best practice, but at least it should get you started in the right direction.
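As a hedged sketch of the local-account approach (the account name, share name and paths are placeholders): on the drop server, create a dedicated account and grant it read access to both the share and the underlying folder:
New-LocalUser -Name 'vstsdrop' -Password (Read-Host -AsSecureString 'Password')
Grant-SmbShareAccess -Name 'drop' -AccountName 'vstsdrop' -AccessRight Read -Force
icacls 'C:\drop' /grant 'vstsdrop:(OI)(CI)RX'   # matching NTFS read permission
Then, on each deployment machine, run the agent as a local user and cache the credential for the drop server (cmdkey stores credentials per user, so this only helps if the agent runs as that same user):
cmdkey /add:MACHINE1 /user:MACHINE1\vstsdrop /pass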