Setting up Vapor and MongoKitten - swift

After successfully running the app on localhost (0.0.0.0:8080), I pushed the code to Git for Heroku, and now I am getting an error on Heroku:
Cannot connect to MongoDB
Process exited with status 0
Inside Package.swift I added:
dependencies: [
.Package(url: "https://github.com/vapor/vapor.git", majorVersion: 1, minor: 1),
.Package(url: "https://github.com/OpenKitten/MongoKitten.git", majorVersion: 3)
],
Inside main.swift, the program exits on this line:
let mongoDatabase = try Database(mongoURL: "mongodb://localhost/mydatabase")
Additional info: I believe SourceTree is leaving something out of the commit, because the same code also fails after checkout on a different machine, even though it compiles perfectly.

Here is how to get the DATABASE_URL for Postgres; the process should be the same for MongoDB:
heroku addons:create heroku-postgresql:hobby-dev
After a few minutes of preparation, run:
heroku config
and the database URL should be listed there.
In your Procfile (created by vapor init) you can pass the database URL to the app. Open it in your text editor and change it from:
web: App --env=production --workdir="./"
to:
web: App --env=production --workdir=./ --config:servers.default.port=$PORT --config:postgresql.url=$DATABASE_URL
The part we add is the database URL: --config:postgresql.url=$DATABASE_URL
Save the Procfile and run git push heroku master; after some time it should be working.
Change the postgresql.url key to your MongoDB config name (depending on which Mongo add-on you use).
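The underlying idea is the same across stacks: read the full connection URL from the environment instead of hardcoding localhost. A minimal Python sketch of the pattern (the DATABASE_URL variable name and the local fallback are assumptions; your Mongo add-on may export a different variable):

```python
import os
from urllib.parse import urlparse


def mongo_url():
    # Prefer the URL the platform injects; fall back to a local
    # instance only for development.
    url = os.environ.get("DATABASE_URL", "mongodb://localhost/mydatabase")
    parsed = urlparse(url)
    # Quick sanity check: a usable connection URL always carries a host.
    if parsed.hostname is None:
        raise ValueError("DATABASE_URL has no host component: %r" % url)
    return url
```

The same environment-first lookup is what the `--config:postgresql.url=$DATABASE_URL` Procfile flag achieves on the Vapor side.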

Related

Containerized .NET Core application using Docker Compose does not resolve MongoDb container name

I have developed, in Visual Studio 2017 (version 15.9.16), a simple .NET Core 2.2 unit test project that connects to a MongoDb instance, with Container Orchestration Support enabled. My goal is to run a container with a MongoDb instance that the unit tests connect to whenever they are launched from the Test Explorer window. The problem I am facing is that from the unit test code I cannot connect to the MongoDb container: the service name defined in docker-compose.yml cannot be resolved.
Here are the contents of the docker-compose.yml file:
version: '3.4'
services:
  myapp:
    image: ${DOCKER_REGISTRY-}myapp
    build:
      context: .
      dockerfile: myapp/Dockerfile
    depends_on:
      - mongo
  mongo:
    image: mongo:latest
The contents of the Dockerfile of the app are the following:
FROM microsoft/dotnet:2.2-runtime AS base
WORKDIR /app
FROM microsoft/dotnet:2.2-sdk AS build
WORKDIR /src
COPY myapp/myapp.csproj myapp/
RUN dotnet restore myapp/myapp.csproj
COPY . .
WORKDIR /src/myapp
RUN dotnet build myapp.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish myapp.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "myapp.dll"]
Inside the unit test code, if I try to connect to the MongoDb instance using var client = new MongoClient("mongodb://mongo:27017");, I get the following exception when trying to write to the database:
A timeout occured after 30000ms selecting a server using
CompositeServerSelector{ Selectors =
MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector,
LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000
} }. Client view of cluster state is { ClusterId : "1", ConnectionMode
: "Automatic", Type : "Unknown", State : "Disconnected", Servers : [{
ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/mongo:27017" }",
EndPoint: "Unspecified/mongo:27017", State: "Disconnected", Type:
"Unknown", HeartbeatException:
"MongoDB.Driver.MongoConnectionException: An exception occurred while
opening a connection to the server. --->
System.Net.Sockets.SocketException: Unknown host at
System.Net.Dns.HostResolutionEndHelper(IAsyncResult asyncResult) at
System.Net.Dns.EndGetHostAddresses(IAsyncResult asyncResult) at
System.Net.Dns.<>c.b__25_1(IAsyncResult
asyncResult) at
System.Threading.Tasks.TaskFactory`1.FromAsyncCoreLogic(IAsyncResult
iar, Func`2 endFunction, Action`1 endAction, Task`1 promise, Boolean
requiresSynchronization)
--- End of stack trace from previous location where exception was thrown --- at
MongoDB.Driver.Core.Connections.TcpStreamFactory.ResolveEndPointsAsync(EndPoint
initial) at
MongoDB.Driver.Core.Connections.TcpStreamFactory.CreateStreamAsync(EndPoint
endPoint, CancellationToken cancellationToken) at
MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken
cancellationToken) --- End of inner exception stack trace --- at
MongoDB.Driver.Core.Connections.BinaryConnection.OpenHelperAsync(CancellationToken
cancellationToken) at
MongoDB.Driver.Core.Servers.ServerMonitor.HeartbeatAsync(CancellationToken
cancellationToken)", LastUpdateTimestamp:
"2019-10-03T10:00:40.7989018Z" }] }.'
If I try to resolve MongoDb container service name using System.Net.Dns.GetHostEntry("mongo") I get this exception:
System.Net.SocketException: 'Unknown host'
It seems clear to me that the .Net Core code inside the unit test container cannot resolve docker-compose.yml service names. On the other hand, if I start a session in the unit test container I can do a ping mongo or telnet mongo 27017 with success. I have also tried ensuring that MongoDb container has started as proposed in this question, but with no luck. Something must be missing in my code or the docker configuration files to enable service name resolution. Any help would be much appreciated.
You can declare a network and assign a static IP address to each service, then use the IP address of the mongo service instead of resolving its hostname:
version: '3.4'
services:
  myapp:
    image: ${DOCKER_REGISTRY-}myapp
    build:
      context: .
      dockerfile: myapp/Dockerfile
    depends_on:
      - mongo
    networks:
      mynetwork:
        ipv4_address: 178.25.0.3
  mongo:
    image: mongo:latest
    networks:
      mynetwork:
        ipv4_address: 178.25.0.2
networks:
  mynetwork:
    driver: bridge
    ipam:
      config:
        - subnet: 178.25.0.0/24
var client = new MongoClient("mongodb://178.25.0.2:27017");
Finally I found the reason for this behavior, after running System.Net.Dns.GetHostName() inside my unit test code and seeing that the name returned was the host name, not the container name. I forgot to mention that I was running my unit tests from the Test Explorer window, and that way container support is completely ignored. Contrary to what I expected, Docker Compose is not invoked, so neither the MongoDb container nor the unit test project container is launched; the unit test project just runs on the host. The feature of running unit tests in a container when container support is enabled in Visual Studio has already been requested at https://developercommunity.visualstudio.com/idea/554907/allow-running-unit-tests-in-docker.html. Please vote the feature up!
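One way to catch this host-vs-container confusion early is a small name-resolution check at test startup; sketched here in Python for brevity (the service name "mongo" comes from the compose file above; the same try/resolve approach works in C# with Dns.GetHostEntry):

```python
import socket


def can_resolve(host: str) -> bool:
    """Return True if DNS can resolve `host` from where this code runs."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False


# Inside a container attached to the compose network, can_resolve("mongo")
# is True; when the tests actually run on the host (as with Test Explorer
# here), it is typically False, which pinpoints the problem immediately.
```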

Gradle Docker tasks

For my local development workflow:
1. I want to ensure that the DB, in this case Postgres, is running in its Docker container. I have a bootRun task defined in my build.gradle file:
bootRun {
    jvmArgs = [
        "-Ddb.host=jdbc:postgresql://localhost:5432/postgres",
        "-Ddb.username=postgres",
        "-Ddb.password=apgdb"
    ]
}
Docker is installed on my machine; I just want to make sure I don't have to manually start the Postgres image from the terminal before doing a bootRun.
Can we create a Gradle task that restarts Postgres on every exit of bootRun and starts it every time we spin up the app?
I use the gradle-docker-compose plugin to achieve this kind of task. You can create a docker-compose.yml file that defines your postgres db:
services:
  db:
    image: postgres:11
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: apgdb
      POSTGRES_DB: postgres
And that would be the respective build.gradle file:
plugins {
    id "com.avast.gradle.docker-compose" version "0.8.14"
}

dockerCompose {
    database {
        useComposeFiles = ['docker-compose.yml']
    }
}

bootRun {
    dependsOn 'databaseComposeUp'
    jvmArgs = [
        "-Ddb.host=jdbc:postgresql://localhost:5432/postgres",
        "-Ddb.username=postgres",
        "-Ddb.password=apgdb"
    ]
}
Now when you run gradle bootRun, the database is started before Spring Boot comes up.
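For the other half of the question, stopping the database when bootRun exits, one option is a finalizer; a sketch, assuming the plugin follows its usual naming convention and generates a databaseComposeDown counterpart for the database nested configuration:

```groovy
// Sketch: tear the database container down whenever bootRun finishes,
// so the next run starts from a fresh composeUp.
bootRun.finalizedBy 'databaseComposeDown'
```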

Heroku can't connect with Postgres DB/Knex/Express

I have an Express API deployed to Heroku, but when I attempt to run the migrations, it throws the following error:
heroku run knex migrate:latest Running knex migrate:latest on ⬢
bookmarks-node-api... up, run.9925 (Free) Using environment:
production Error: connect ECONNREFUSED 127.0.0.1:5432
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1117:14)
In my knexfile.js, I have:
production: {
  client: 'postgresql',
  connection: {
    database: process.env.DATABASE_URL
  },
  pool: {
    min: 2,
    max: 10
  },
  migrations: {
    directory: './database/migrations'
  }
}
I also tried setting the migrations config to tableName: 'knex_migrations', which throws this error:
heroku run knex migrate:latest Running knex migrate:latest on ⬢
bookmarks-node-api... up, run.7739 (Free) Using environment:
production Error: ENOENT: no such file or directory, scandir
'/app/migrations'
Here is the config as set in Heroku:
-node-api git:(master) heroku pg:info
=== DATABASE_URL
Plan: Hobby-dev
Status: Available
Connections: 0/20
PG Version: 10.7
Created: 2019-02-21 12:58 UTC
Data Size: 7.6 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
I think the issue is that for some reason, it is looking at localhost for the database, as if the environment is being read as development though the trace shows Using environment: production.
When you provide an object as your connection, you're providing the individual parts of the connection information. Here, you're saying that the name of your database is everything contained in process.env.DATABASE_URL:
connection: {
  database: process.env.DATABASE_URL
},
Any keys you don't provide values for fall back to defaults. An example is the host key, which defaults to the local machine.
But the DATABASE_URL environment variable contains all of the information that you need to connect (host, port, user, password, and database name) in a single string. That whole value should be your connection setting:
connection: process.env.DATABASE_URL,
You should check that the Postgres add-on is set up as described in these docs, since the DATABASE_URL is automatically set for you as stated here.
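The point that the single URL carries every connection component can be illustrated with a quick parse; a Python sketch with a made-up URL (the real value comes from heroku config):

```python
from urllib.parse import urlparse

# A made-up example of the kind of URL Heroku puts in DATABASE_URL.
url = "postgres://alice:s3cret@ec2-0-0-0-0.compute-1.amazonaws.com:5432/d4kt"

parts = urlparse(url)
# Host, port, user, password and database name all come out of the one
# string, which is why the whole URL should be the `connection` setting
# rather than just the `database` key.
print(parts.hostname, parts.port, parts.username, parts.password,
      parts.path.lstrip("/"))
```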

Laravel 5 - Cannot migrate

I am working with Laravel 5.2. I want to add some tables to my database, so I've created a new migration file and tried to run the migration.
When trying to run php artisan migrate I get the following error
[PDOException]
SQLSTATE[HY000] [1045] Access denied for user 'homestead'@'localhost' (using password: YES)
But the password is correct. I can access my DB via Sequel Pro (OS X) and the website itself is working, too (I can create new users etc).
I work with homestead, but changed the default database. I've restarted the VM and tried php artisan config:clear.
My .env:
APP_ENV=local
APP_DEBUG=true
APP_KEY=base64:SDXEyixnQr+qVCH8hbY2bRo3yQtmL8BwEbwY94tDPRc=
APP_URL=http://palabi.dev
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_DATABASE=palabi
DB_USERNAME=homestead
DB_PASSWORD=password
CACHE_DRIVER=file
SESSION_DRIVER=file
QUEUE_DRIVER=sync
And my Homestead configuration
ip: "192.168.10.10"
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
keys:
  - ~/.ssh/id_rsa
folders:
  - map: ~/Homesteadprojects/sites
    to: /home/vagrant/sites
sites:
  - map: test.app
    to: /home/vagrant/sites/test
  - map: laravel-53.app
    to: /home/vagrant/sites/laravel-53/public
  - map: palabi.app
    to: /home/vagrant/sites/palabi/public
databases:
  - homestead
  - palabi
What am I doing wrong?
Thanks!
OK, I've found the problem.
Laravel was trying to connect to my local database (where no user homestead exists), but it must connect to the database inside the Homestead virtual machine.
In my .env I had to change DB_HOST from 127.0.0.1 to the IP from my Homestead configuration file, 192.168.10.10.
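With the Homestead IP from the configuration above, the relevant .env line becomes:

```
DB_HOST=192.168.10.10
```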

Alembic / Flask-migrate migration on Heroku runs but does not create tables

I am attempting to deploy a Flask app to Heroku. I have pushed to Heroku and can access my login page but any call to the db gives an OperationalError:
2014-01-29T12:12:31.801772+00:00 app[web.1]: OperationalError: (OperationalError) no such table: projects_project u'SELECT
Using Flask-migrate I can successfully run local migrations and upgrades:
INFO [alembic.migration] Context impl PostgresqlImpl.
INFO [alembic.migration] Will assume transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
When I try to upgrade on Heroku using heroku run python manage.py db upgrade the upgrade appears to happen, but the Context impl. is now SQLite?:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku run python manage.py db upgrade
Running `python manage.py db upgrade` attached to terminal... up, run.9069
INFO [alembic.migration] Context impl SQLiteImpl.
INFO [alembic.migration] Will assume non-transactional DDL.
INFO [alembic.migration] Running upgrade None -> 4b56d58e1d4c, empty message
Running Heroku pg:info gives:
=== HEROKU_POSTGRESQL_PINK_URL (DATABASE_URL)
Plan: Dev
Status: available
Connections: 0
PG Version: 9.3.2
Created: 2014-01-27 18:55 UTC
Data Size: 6.4 MB
Tables: 0
Rows: 0/10000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
The relevant logs for the Heroku upgrade are:
2014-01-29T12:55:40.112436+00:00 heroku[api]: Starting process with command `python manage.py db upgrade` by kt@gmail.com
2014-01-29T12:55:44.638957+00:00 heroku[run.9069]: Awaiting client
2014-01-29T12:55:44.667692+00:00 heroku[run.9069]: Starting process with command `python manage.py db upgrade`
2014-01-29T12:55:44.836337+00:00 heroku[run.9069]: State changed from starting to up
2014-01-29T12:55:46.643857+00:00 heroku[run.9069]: Process exited with status 0
2014-01-29T12:55:46.656134+00:00 heroku[run.9069]: State changed from up to complete
Also, heroku config gives me:
(dev01)Toms-MacBook-Pro:dev01 kt$ heroku config
=== myapp Config Vars
DATABASE_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
HEROKU_POSTGRESQL_PINK_URL: postgres://xxx.compute-1.amazonaws.com:5432/da0jtkatk6057v
where [xxx == xxx]
How is the Context impl set? Apart from this obvious difference between the working local setup and Heroku, I can't work out what's happening or how to debug it. Thanks.
The URL for the database is taken from the SQLALCHEMY_DATABASE_URI configuration in your Flask app instance. This happens in the env.py configuration for Alembic that was created in the migrations folder.
Are you storing the value of os.environ['DATABASE_URL'] in that configuration before you hand control over to Flask-Migrate and Alembic? It seems you have a default SQLite-based database that never gets overwritten with the real one provided by Heroku.
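A minimal sketch of that fix, assuming a typical Flask config module (the SQLite fallback path is illustrative):

```python
import os


def database_uri(default="sqlite:///app.db"):
    """Use Heroku's DATABASE_URL when present, else a local SQLite file."""
    return os.environ.get("DATABASE_URL", default)


class Config:
    # Flask-Migrate's generated migrations/env.py reads this value; if
    # DATABASE_URL is never consulted, upgrades silently run against the
    # SQLite default ("Context impl SQLiteImpl." in the logs above).
    SQLALCHEMY_DATABASE_URI = database_uri()
```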