I'm following the actions-on-google/smart-home-nodejs tutorial and I'm stuck at the Connect to Firebase step. When I run "firebase use --add", the answer is:
Error: Invalid project selection, please verify project hometeste-4a254 exists and you have access.
but the project exists.
marcelo@cloudshell:~/smart-home-nodejs (hometeste-4a254)$ firebase list
┌─────────────────┬───────────────────────┬─────────────┐
│ Name            │ Project ID / Instance │ Permissions │
├─────────────────┼───────────────────────┼─────────────┤
│ GoogleHomeTeste │ hometeste-4a254       │ Owner       │
└─────────────────┴───────────────────────┴─────────────┘
marcelo@cloudshell:~/smart-home-nodejs (hometeste-4a254)$ firebase use --add hometeste-4a254
Error: Invalid project selection, please verify project
hometeste-4a254 exists and you have access.
I have a running Terraform infrastructure which configures an Aurora PostgreSQL Serverless RDS database. It also uses the postgresql provider to configure the database roles, for example using the postgresql_role resource and others.
I need to destroy this RDS serverless DB and replace it with a provisioned one.
I have updated the TF aws_rds_cluster resource, and when trying to deploy the new provisioned config with terraform apply, I get the following errors for the postgresql provider resources (which are unchanged):
Error: could not start transaction: dial tcp 127.0.0.1:5432: connect: connection refused
│
│ with postgresql_database.testdb,
│ on main.aws.db.tf line 54, in resource "postgresql_database" "testdb":
│ 54: resource "postgresql_database" "testdb" {
│
╵
╷
│ Error: dial tcp 127.0.0.1:5432: connect: connection refused
│
│ with postgresql_role.materialized_view_owner,
│ on main.aws.db.tf line 62, in resource "postgresql_role" "materialized_view_owner":
│ 62: resource "postgresql_role" "materialized_view_owner" {
This seems to indicate the provider is not able to connect to the database or is using an incorrect endpoint. Yet the serverless database is still running, and the provider is configured to use its endpoint as the host. At the moment the errors appear, Terraform has not yet seen the aws_rds_cluster change from serverless to provisioned.
Provider config below:
provider "postgresql" {
host = aws_rds_cluster.postgresql.endpoint
port = 5432
database = aws_rds_cluster.postgresql.database_name
username = aws_rds_cluster.postgresql.master_username
password = aws_rds_cluster.postgresql.master_password
sslmode = "require"
superuser = false
expected_version = "10.18"
}
I don't really have any idea what could cause this. Could someone help out? Thanks!
EDIT:
Adding resource definitions per request of @Marcin
postgresql provider resources:
resource "postgresql_database" "testdb" {
name = "testdb"
}
resource "postgresql_role" "view_owner" {
name = "view_owner"
login = false
}
resource "postgresql_grant" "view_owner" {
database = postgresql_database.testdb.name
role = postgresql_role.view_owner.name
object_type = "table"
schema = "public"
privileges = ["SELECT"]
}
resource "postgresql_default_privileges" "view_owner" {
database = postgresql_database.testdb.name
role = postgresql_role.view_owner.name
schema = "public"
owner = aws_rds_cluster.postgresql.master_username
object_type = "table"
privileges = ["SELECT"]
}
resource "postgresql_grant_role" "view_owner_master_user" {
role = aws_rds_cluster.postgresql.master_username
grant_role = postgresql_role.view_owner.name
with_admin_option = true
}
To fix this I used terraform apply -target=aws_rds_cluster.postgresql to deploy just the DB cluster changes, and afterwards found another issue with my TF RDS config: the Aurora cluster was created, but I hadn't realized that I also need a separate TF resource for an instance. After fixing that I re-ran the full terraform apply without any targeting, and the postgresql provider errors were gone.
Credit and thanks for help with debugging the issue go to @MartinAtkins.
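For reference, here is a minimal sketch of the kind of instance resource that was missing (the resource name and instance class below are illustrative assumptions, not the exact values from my config):

resource "aws_rds_cluster_instance" "postgresql" {
  # attach a provisioned instance to the existing Aurora cluster
  cluster_identifier = aws_rds_cluster.postgresql.id
  engine             = aws_rds_cluster.postgresql.engine
  engine_version     = aws_rds_cluster.postgresql.engine_version
  instance_class     = "db.r5.large" # illustrative value
}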
I have a docker-compose file with two services: odoo:latest and postgres:13.
I try to update the app list in debug mode at http://localhost:8070/web?debug=1,
but nothing happens.
The odoo service has a volumes section:
volumes:
  - ./extra-addons:/mnt/extra-addons
  - data:/var/lib/odoo
  - ./config:/etc/odoo
that corresponds to this directory structure:
.
├── config
│ └── odoo.conf
├── docker-compose.yml
└── extra-addons
├── __init__.py
└── __manifest__.py
Content of the odoo.conf file:
[options]
addons_path = /mnt/extra-addons
data_dir = /var/lib/odoo
Content of __manifest__.py:
{
    'name': 'estate',
    'depends': [
        'base_setup',
    ],
    "installable": True,
    "application": True,
    "auto_install": False,
}
I also tried to update the app list from the terminal.
When I exec into the odoo service and issue this command:
/usr/bin/odoo -p 8071 --db_host=172.17.0.2 --db_user=odoo --db_password=odoo -d odoo -u estate
I get:
odoo: Odoo version 16.0-20221025
odoo: Using configuration file at /etc/odoo/odoo.conf
odoo: addons paths: ['/usr/lib/python3/dist-packages/odoo/addons', '/var/lib/odoo/addons/16.0', '/mnt/extra-addons']
odoo: database: odoo@172.17.0.2:default
odoo.sql_db: Connection to the database failed
Traceback (most recent call last):
File "/usr/bin/odoo", line 8, in
odoo.cli.main()
...
File "/usr/lib/python3/dist-packages/odoo/sql_db.py", line 638, in borrow
result = psycopg2.connect(
File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 127, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "172.17.0.2", port 5432 failed: Connection timed out
Is the server running on that host and accepting TCP/IP connections?
THANK YOU!
Every module needs its own directory.
As I see, you've put __init__.py and __manifest__.py directly inside the extra-addons folder.
extra-addons should contain module folders, so the correct structure should be:
.
├── config
│ └── odoo.conf
├── docker-compose.yml
└── extra-addons
└── estate
├── __init__.py
└── __manifest__.py
where estate is your module's technical name (which, as a best practice, should be the same as the "name" key in the manifest dictionary).
Furthermore, if you added the ./extra-addons:/mnt/extra-addons volume after you ran docker-compose, you should run it again.
This will only re-create your odoo container, since the postgres container has not changed, meaning you won't lose any data.
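For example, a minimal sketch assuming the compose service is named odoo and you are using the docker-compose CLI:

# re-creates only the containers whose configuration changed
docker-compose up -d
# or force re-creation of just the odoo service
docker-compose up -d --force-recreate odoo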
I have created a new environment for my application and called it docker. I'm trying stuff out so I set it like this:
application-docker.yml
micronaut:
  application:
    name: time
  server:
    netty:
      access-logger:
        enabled: true
        logger-name: access-logger
datasources:
  default:
    url: jdbc:postgresql://db:5432/postgres
    driverClassName: org.postgresql.Driver
    username: postgres
    password: postgres
    schema-generate: CREATE_DROP
    dialect: POSTGRES
    schema: time
jpa.default.properties.hibernate.hbm2ddl.auto: update
flyway:
  datasources:
    default:
      enabled: true
      schemas: time
...
However, when I try to run my app like this:
java -jar target/timeshare-0.1.jar -Dmicronaut.environments=docker -Dcom.sun.management.jmxremote -Xmx128m
it fails... because it can't connect to localhost!
08:11:00.949 [main] INFO com.zaxxer.hikari.HikariDataSource - HikariPool-1 - Starting...
08:11:02.013 [main] ERROR com.zaxxer.hikari.pool.HikariPool - HikariPool-1 - Exception during pool initialization.
org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
at org.postgresql.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:285)
Why is it trying to connect to localhost? What am I missing?
It seems that Micronaut is not able to locate the application-docker.yml file and is then falling back to the default one.
Note that you can pass, for example, -Dmicronaut.environments=not-existing-profile, and even though that environment does not exist, no error is shown.
So make sure the application-docker.yml file is in the src/main/resources directory, and also that it really is exported into the resulting jar during the build and ends up in the root of the jar archive:
target/timeshare-0.1-all.jar
├── com
├── META-INF
├── org
├── application-docker.yml
├── application.yml
├── logback.xml
...
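A quick way to check is to list the jar contents (a sketch; adjust the jar name to whatever your build actually produces):

jar tf target/timeshare-0.1-all.jar | grep application-docker.yml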
How are you building the resulting jar? If you use the shadowJar task, it should contain everything.
Another option is to use the MICRONAUT_ENVIRONMENTS environment variable:
export MICRONAUT_ENVIRONMENTS=docker
But this behaves the same way as -Dmicronaut.environments=docker startup option.
Another option is to specify the exact path to the application-docker.yml configuration file via the micronaut.config.files startup option:
java -jar target/timeshare-0.1-all.jar -Dmicronaut.config.files=/some/external/location/application-docker.yml
I'm trying to update my 21-points.com app to use the latest code from the JHipster Mini-Book. Here’s what I’ve done so far:
1. Used Heroku's pg:pull command to copy my production database to a local database:
heroku pg:pull HEROKU_POSTGRESQL_GRAY_URL health --app health-by-points
2. Installed Liquibase and copied the PostgreSQL database driver into $LIQUIBASE_HOME/lib:
[mraible:/opt/tools/liquibase-3.5.3-bin] % cp ~/.m2/repository/org/postgresql/postgresql/9.4.1212/postgresql-9.4.1212.jar lib/.
3. Ran Liquibase's diffChangeLog command to generate a changelog that migrates from the old database to the new schema:
./liquibase \
--driver=org.postgresql.Driver --url=jdbc:postgresql://localhost:5432/health --username=postgres \
diffChangeLog \
--referenceUrl=jdbc:postgresql://localhost:5432/twentyonepoints \
--referenceUsername=twentyonepoints --referencePassword=21points
4. Copied the output to src/main/resources/config/liquibase/changelog/20171024115100_migrate_from_v2.xml and added it to config/liquibase/master.xml.
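For context, such an include in master.xml typically looks as follows (a sketch of the usual JHipster/Liquibase convention, not copied from my actual file):

<include file="config/liquibase/changelog/20171024115100_migrate_from_v2.xml" relativeToChangelogFile="false"/>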
5. Changed application-prod.yml in my latest-and-greatest JHipster app to point to the old database that I downloaded from Heroku.
6. Ran ./gradlew -Pprod. Error on startup:
org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/jhipster/health/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.ValidationFailedException: Validation Failed:
2 change sets check sum
config/liquibase/changelog/00000000000000_initial_schema.xml::00000000000000::jhipster was: 7:eda8cd7fd15284e6128be97bd8edea82 but is now: 7:a6235f40597a13436aa36c6d61db2269
config/liquibase/changelog/00000000000000_initial_schema.xml::00000000000001::jhipster was: 7:e87abbb935c561251405aea5c91bc3a4 but is now: 7:29cf7a7467c2961e7a2366c4347704d7
7. Ran the following commands to update the md5sums in the database:
update databasechangelog set md5sum = '7:a6235f40597a13436aa36c6d61db2269' where md5sum = '7:eda8cd7fd15284e6128be97bd8edea82';
update databasechangelog set md5sum = '7:29cf7a7467c2961e7a2366c4347704d7' where md5sum = '7:e87abbb935c561251405aea5c91bc3a4';
8. Tried to run again:
2017-10-24 12:12:02.670 ERROR 22960 --- [ main] liquibase : classpath:config/liquibase/master.xml: config/liquibase/changelog/20160831020048_added_entity_Points.xml::20160831020048-1::jhipster: Change Set config/liquibase/changelog/20160831020048_added_entity_Points.xml::20160831020048-1::jhipster failed. Error: ERROR: relation "points" already exists [Failed SQL: CREATE TABLE public.points (id BIGINT NOT NULL, jhi_date date NOT NULL, exercise INT, meals INT, alcohol INT, notes VARCHAR(140), user_id BIGINT, CONSTRAINT PK_POINTS PRIMARY KEY (id))]
2017-10-24 12:12:02.672 WARN 22960 --- [ main] ationConfigEmbeddedWebApplicationContext : Exception encountered during context initialization - cancelling refresh attempt: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'liquibase' defined in class path resource [org/jhipster/health/config/DatabaseConfiguration.class]: Invocation of init method failed; nested exception is liquibase.exception.MigrationFailedException: Migration failed for change set config/liquibase/changelog/20160831020048_added_entity_Points.xml::20160831020048-1::jhipster:
Reason: liquibase.exception.DatabaseException: ERROR: relation "points" already exists [Failed SQL: CREATE TABLE public.points (id BIGINT NOT NULL, jhi_date date NOT NULL, exercise INT, meals INT, alcohol INT, notes VARCHAR(140), user_id BIGINT, CONSTRAINT PK_POINTS PRIMARY KEY (id))]
Any idea why it’s trying to create my database tables again? Is it because I changed the md5sum?
Do these look like the proper steps you’d take when migrating from an old JHipster-generated schema to a brand new one? The only difference between my old schema and new one is the old one has a preferences_id column in the user table while the new one has a user_id column in the preferences table.
I would do almost the same, except that on step 7 I would rely on Liquibase's own methods for working with the checksums: I would run clearCheckSums from the command line, after which all the checksums are recomputed on the next run. If there is still an error about a changeset that wants to run again, as in step 8, you can then use changelogSync from the command line, like:
./liquibase --driver=org.postgresql.Driver --url=jdbc:postgresql://localhost:5432/health \
  --username=postgres clearCheckSums
./liquibase --driver=org.postgresql.Driver --url=jdbc:postgresql://localhost:5432/health \
  --username=postgres changelogSync
I'm trying to install a Chef server on an Ubuntu 14.04 box. I've downloaded the .deb file from the site and installed it with sudo dpkg -i chef-server-core_12.0.8-1_amd64.deb, but when I run sudo chef-server-ctl reconfigure, all goes well until it reaches the postgresql part:
Running handlers:
[2015-05-03T23:16:07-04:00] ERROR: Running exception handlers
Running handlers complete
[2015-05-03T23:16:07-04:00] ERROR: Exception handlers complete
[2015-05-03T23:16:07-04:00] FATAL: Stacktrace dumped to /opt/opscode/embedded/cookbooks/cache/chef-stacktrace.out
Chef Client failed. 44 resources updated in 198.107797872 seconds
[2015-05-03T23:16:08-04:00] FATAL: Mixlib::ShellOut::ShellCommandFailed: private-chef_pg_database[opscode-pgsql] (private-chef::postgresql line 127) had an error: Mixlib::ShellOut::ShellCommandFailed: execute[create_database_opscode-pgsql] (/opt/opscode/embedded/cookbooks/cache/cookbooks/private-chef/providers/pg_database.rb line 13) had an error: Mixlib::ShellOut::ShellCommandFailed: Expected process to exit with [0], but received '1'
---- Begin output of createdb --template template0 --encoding UTF-8 opscode-pgsql ----
STDOUT:
STDERR: createdb: could not connect to database template1: FATAL: role "opscode-pgsql" does not exist
---- End output of createdb --template template0 --encoding UTF-8 opscode-pgsql ----
Ran createdb --template template0 --encoding UTF-8 opscode-pgsql returned 1
Am I missing a step? The installation instructions don't say anything about any other intermediate task to perform.
Thank you very much for any help you can give me.
Saw the exact same error installing chef-server-core-12.0.8-1.el6.x86_64.rpm on RHEL 6.6. Had gone through every prerequisite step listed at https://docs.chef.io/install_server_pre.html ... and finally chalked it up to having multiple versions of Postgres on the system.
Attempted the same install and reconfigure command (chef-server-ctl reconfigure) on a system without Postgres installed and had success.
HTH.
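If you want to check for pre-existing Postgres packages before running the reconfigure, something like this is a quick sketch (dpkg on Debian/Ubuntu, rpm on RHEL):

dpkg -l | grep -i postgres
rpm -qa | grep -i postgres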
Cannot reproduce your problem. The installation of Chef Server 12.0.8-1 worked fine on my Test Kitchen/Vagrant setup.
Example
├── Berksfile
├── .kitchen.yml
├── roles
└── chef-server.json
Berksfile
source "https://supermarket.chef.io"
cookbook "chef-server"
.kitchen.yml
---
driver:
  name: vagrant
provisioner:
  name: chef_zero
platforms:
  - name: ubuntu-14.04
    driver:
      network:
        - ["private_network", {ip: "192.168.38.34"}]
      customize:
        memory: 4096
suites:
  - name: default
    run_list:
      - role[chef-server]
    attributes:
roles/chef-server.json
{
  "name": "chef-server",
  "description": "Chef server",
  "run_list": [
    "recipe[chef-server]"
  ]
}
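With those files in place, converging the instance is just (assuming Test Kitchen with the Vagrant driver is installed; the instance name follows the usual suite-platform convention):

kitchen converge default-ubuntu-1404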