How to connect build agent to PostgreSQL database - azure-devops

My integration tests for my asp.net core application require a connection to a PostgreSQL database. In my deployment pipeline I only want to deploy if my integration tests pass.
How do I supply a working connection string inside the Microsoft build agent?
I looked under service connections and couldn't see anything related to a database.

If you are using a Microsoft-hosted agent, then your database needs to be accessible from the internet.
Otherwise, you need to run the build on a self-hosted agent that can access your database.
I assume the default connection string is in appsettings.json. You could store the actual database connection string in a secret variable, then update the appsettings.json file with that variable's value through a task (e.g. Set Json Property) or programmatically (e.g. a PowerShell script) before running the web app and starting the tests during the build.
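For example, a pipeline step could patch the file from a secret variable just before the tests run. A minimal sketch, assuming a secret variable named DbConnectionString and a ConnectionStrings:Default entry in appsettings.json (the variable name, property names and paths are illustrative):
steps:
- powershell: |
    # Overwrite the placeholder connection string with the secret pipeline variable.
    $file = "MyApp.IntegrationTests/appsettings.json"   # path is illustrative
    $json = Get-Content $file -Raw | ConvertFrom-Json
    $json.ConnectionStrings.Default = $env:DB_CONNECTION_STRING
    $json | ConvertTo-Json -Depth 10 | Set-Content $file
  env:
    # Secret variables have to be mapped into the environment explicitly.
    DB_CONNECTION_STRING: $(DbConnectionString)
  displayName: Inject DB connection string
- script: dotnet test MyApp.IntegrationTests
  displayName: Run integration tests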
If any PostgreSQL database will do, you can use a service container with a Docker image that provides a PostgreSQL database (e.g. postgres).
For a classic (non-YAML) pipeline, you could call the docker command to run the image instead.
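For a YAML pipeline, a minimal sketch of the service-container approach might look like this (image tag, credentials, project name and configuration key are illustrative, and it assumes the tests read their connection string from configuration/environment). On a hosted agent the mapped port is reachable on localhost:
resources:
  containers:
  - container: pg
    image: postgres:13
    env:
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
      POSTGRES_DB: integration_tests
    ports:
    - 5432:5432

jobs:
- job: IntegrationTests
  pool:
    vmImage: 'ubuntu-latest'
  services:
    # "postgres" is the alias the job uses; "pg" is the container resource above.
    postgres: pg
  steps:
  - script: dotnet test MyApp.IntegrationTests
    env:
      ConnectionStrings__Default: "Host=localhost;Port=5432;Database=integration_tests;Username=test;Password=test"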

I would recommend you use runsettings, which you can override in a task. That way you keep your connection string out of source control. Please check this link. And in terms of service connections, you don't need any service connection; all you need is a proper connection string.
Since I don't know the details of how you connect to your DB, I can't give you more info. If you provide an example of how you already connect to the database, I can try to provide a better answer.
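As a rough sketch of the runsettings idea: the test project reads its connection string from a TestRunParameters entry in a .runsettings file, and the pipeline overrides that value at run time from a secret variable. The parameter name, file path and variable name below are only placeholders:
steps:
- task: VSTest@2
  inputs:
    testAssemblyVer2: '**/*IntegrationTests*.dll'
    runSettingsFile: 'Tests/integration.runsettings'
    # Override the placeholder value committed to source control
    # with a secret pipeline variable at run time.
    overrideTestrunParameters: '-DbConnectionString $(DbConnectionString)'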

Related

Change the Database Address of an existing Meteor App running on a Ubuntu Cloud Server

I have a Meteor App running on a Ubuntu Droplet on Digital Ocean (your basic virtual machine). This app was written by a company that went out of business and left us with nothing.
The database is a MongoDB currently running on IBM Compose. Compose is shutting down in a month and the Database needs to be moved and our App needs to connect to the new database.
I had no issues exporting and creating a MongoDB with all the data on a different server.
I cannot for the life of me figure out where on the live Meteor App server I would change the address of the database connection. There is no simple top level config file where I can change this?? Does anyone out there know where I would do this?
I realize that in the long term I will need to either rewrite or deprecate this aging app, but in the short term the company relies on it and IBM decided to just shut down their Compose service so please help!!
There are mainly the MONGO_URL and MONGO_OPLOG_URL, which are configured as environment variables: https://docs.meteor.com/environment-variables.html#MONGO-OPLOG-URL
Now you don't set these within the code but during deployment. If you are running on localhost and want to connect to the external MongoDB you can simply use:
$ MONGO_URL="mongodb://user:password@myserver.com:port" meteor
If you want to deploy the app, you should stick with the docs: https://galaxy-guide.meteor.com/mongodb.html#authentication
If you use MUP then configure the mongo appropriately: https://meteor-up.com/docs.html#mongodb
Edit: If your app was previously deployed using MUP you can try to restore the environment variables from /opt/app-name/config (where app-name is the name of your app), which contains env.list (including all environment variables, and thus your MONGO_URL) and start.sh, which you can use to recreate the mup.js config.

How to create and populate the wso2 db tables when using docker-compose?

I'm trying to create a test deployment of the wso2 products using docker-compose, starting with the API manager. I'm using the Dockerfile https://hub.docker.com/r/wso2/wso2am. I have configured my master datasource to point to a PostgreSQL server that is managed by docker-compose.
I have an initial script that is run in the pgsql container that creates the database and db user.
I don't know how to get the WSO2 API Manager to create and populate the tables. The WSO2 documentation says to use "-DSETUP" when running the product, but I don't have access to this.
Does anyone know how to do this? Or do I need to build and configure my own Docker images?
-DSetup is now deprecated and is not available in the latest product releases. Please check the docker-compose resources of WSO2 API Manager in [1].
That docker-compose example uses MySQL, and MySQL supports supplying SQL files for database creation when the container starts. PostgreSQL supports the same mechanism, so you can do the same there; otherwise you would need to automate the database table creation yourself.
[1] - https://github.com/wso2/docker-apim/tree/v3.0.0.1/docker-compose/apim-with-analytics
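A rough docker-compose sketch of that idea for PostgreSQL (image tags, credentials, ports and paths are illustrative): the official postgres image executes any *.sql or *.sh files mounted into /docker-entrypoint-initdb.d on first start, so the WSO2 database scripts (shipped under the product's dbscripts folder) can be placed there to create and populate the tables:
version: "3"
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_USER: wso2carbon
      POSTGRES_PASSWORD: wso2carbon
      POSTGRES_DB: wso2am_db
    volumes:
      # Copy the PostgreSQL scripts from the product's dbscripts folder into ./initdb.
      - ./initdb:/docker-entrypoint-initdb.d
  api-manager:
    image: wso2/wso2am:3.0.0
    depends_on:
      - postgres
    ports:
      - "9443:9443"
      - "8243:8243"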

.NET Core Entity Framework on AWS

I've got a web application written in .NET Core using Entity Framework to connect to a SQL Server database. Everything works fine locally. The staging/production environment for the app is AWS Elastic Beanstalk, and I seem to be stuck on how to get the app to connect to the database there.
I've created an RDS (SQL Server) database underneath the Elastic Beanstalk app.
Connection string:
user id=[USER];password=[PASSWORD];Data Source=[SERVER].rds.amazonaws.com:1433;Initial Catalog=Development
Creating the DbContext:
services.AddDbContext<MyDbContext>(o =>
    o.UseSqlServer(Configuration["Db:ConnectionString"]));
App fails to start, based on the following line in Startup.cs:
dbContext.Database.EnsureCreated();
You have to troubleshoot step by step with the procedure below:
Is the DB connection string actually working? It is better to test it from another app or with a simple connection test first. It is also possible that a firewall (or the RDS security group) is blocking port 1433. Note that SQL Server connection strings expect the port after a comma rather than a colon, e.g. Data Source=[SERVER].rds.amazonaws.com,1433.
As per your code, Entity Framework will create the database with the code-first approach (EnsureCreated), so you have to make sure the IIS application pool account your app runs under has rights to create and write to the SQL database. Most probably that is your problem.

Is there any other way other than MLab/ObjectRocket to migrate the app from parse

I have created an AWS EC2 instance with the RocksDB engine, and now I am trying to migrate the Parse application to this instance as instructed here:
https://gyminutesapp.wordpress.com/2016/04/28/migrating-parse-mongodb-to-mongorocks-on-aws
Is it compulsory that I have to do it via MLab/ObjectRocket, or is there any other way?
Can anyone help me out with the further steps? How do I connect to the Parse Server and migrate the data?
You can move to any MongoDB database. You can set up any server, install MongoDB, allow remote access, and push your data from parse.com to your own MongoDB database. This is the first step in the Parse migration process.
Below are the other steps to take care of:
1. Host the open source Parse Server and configure it to connect to your database.
2. Fix your cloud code; minor changes might be required.
3. Replace the cloud modules that you are using with their npm module counterparts.
4. Deploy!

Reading from a password file for tftpconnection

I am trying to use tFTPConnection to download certain files from an FTP site.
It is a regular FTP connection, connecting on port 21.
I would like to be able to read the password from a file rather than hard coding the password to the job.
At the minute I'm simply making the connection and then printing success.
Any advice on how this could be approached or solved?
Talend supports the idea of context variables, which allow you to define at run time the values used for them.
This is typically used so you can "contextualise" a connection and then deploy the job in multiple environments, with the connection pointing at the environment-specific endpoint.
For instance, a job might need to connect to a database but that database is different for each of a development, a testing and a production environment.
Instead of hard coding the connection parameters in the job, we instead create some context variables under a context group and refer to those context variables in the connection parameters.
Then, at run time, we have the Talend job load these contexts from a file containing the relevant connection parameters, using an implicit context load.
In this case, the job will read the context variables at run time from a CSV called test.csv.
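A minimal sketch of what test.csv might contain, assuming the context group defines host, port, database, username and password variables and the implicit context load is configured with a semicolon as the key/value separator:
host;localhost
port;3306
database;test
username;root
password;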
Now, when this job is run it will attempt to connect to localhost:3306/test with the root user and an empty password.
If we have another context file on another machine (but with the same file path), then it could refer to a database on some other server or simply use different credentials, and the job would connect to that other database instead.
For your use case you can simply create a single context group with the FTP connection settings, including the password (or potentially just contextualise the password), and then refer to it in the same way.
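A minimal sketch of what that FTP context file might contain, again as key;value pairs with a semicolon separator (all variable names and values here are illustrative):
ftp_host;ftp.example.com
ftp_port;21
ftp_username;ftpuser
ftp_password;secret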