Symfony JSON auth setup - Undefined table: 7 ERROR: relation "user" does not exist - postgresql

I'm new to Symfony and I'm trying to set up a REST API with a JSON web token authentication system.
When I run the server and call the login route, I get the following error with a 500 Server Error response:
SQLSTATE[42P01]: Undefined table: 7 ERROR: relation "user" does not exist
I'm running the DB in a Docker container; here is the error from its logs:
STATEMENT: SELECT t0.id AS id_1, t0.email AS email_2, t0.roles AS roles_3, t0.password AS password_4 FROM "user" t0 WHERE t0.email = $1 LIMIT 1
ERROR: relation "user" does not exist at character 96
But I can access my database using DBeaver and execute the statement without any problem.
I'm using the following security.yml configuration (default values as much as possible):
security:
    password_hashers:
        Symfony\Component\Security\Core\User\PasswordAuthenticatedUserInterface: "auto"
    providers:
        app_user_provider:
            entity:
                class: App\Entity\User
                property: email
    firewalls:
        dev:
            pattern: ^/(_(profiler|wdt)|css|images|js)/
            security: false
        main:
            lazy: true
            provider: app_user_provider
            json_login:
                check_path: api_login
                username_path: email
                password_path: password
    access_control:
        # - { path: ^/admin, roles: ROLE_ADMIN }
        # - { path: ^/profile, roles: ROLE_USER }

when@test:
    security:
        password_hashers:
            Symfony\Component\Security\Core\User\PasswordAuthenticatedUserInterface:
                algorithm: auto
                cost: 4 # Lowest possible value for bcrypt
                time_cost: 3 # Lowest possible value for argon
                memory_cost: 10 # Lowest possible value for argon
I have followed the official doc about Security setup and JSON Login. Here are my User and ApiLoginController:
<?php

namespace App\Entity;

use App\Repository\UserRepository;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Security\Core\User\PasswordAuthenticatedUserInterface;
use Symfony\Component\Security\Core\User\UserInterface;

#[ORM\Entity(repositoryClass: UserRepository::class)]
#[ORM\Table(name: '`user`')]
class User implements UserInterface, PasswordAuthenticatedUserInterface
{
    // ... Code left untouched here ...
}
<?php

namespace App\Controller;

use Symfony\Bundle\FrameworkBundle\Controller\AbstractController;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\Routing\Annotation\Route;

class ApiLoginController extends AbstractController
{
    #[Route('/api/login', name: 'api_login')]
    public function index(): JsonResponse
    {
        return $this->json([
            'message' => 'Welcome to your new controller!',
            'path' => 'src/Controller/ApiLoginController.php',
        ]);
    }
}
I have read that the quoted "user" table name can be a problem with Postgres (user is a reserved word), but the issue is still there even when using another name.
More info:
Windows 10 or Linux Manjaro (tested on both)
Symfony version - 6.0
Doctrine Bundle version - 2.7
PHP version - 8.1.8
Docker image - postgres:13-alpine
Reproducible repo: https://github.com/Clm-Roig/symfony6-setup-issue

I recommend not implementing your own JWT handling but using an already existing and established implementation instead.
The Symfony documentation only covers the authentication part, not the generation of the actual JWT. In particular, validating the JWT is mandatory if you want to use it for stateless authentication.
It doesn't solve your actual problem for now, but I strongly recommend you think about using an appropriate bundle that generates the JWTs and takes care of the authentication.
Maybe this is a good bundle: https://github.com/lexik/LexikJWTAuthenticationBundle
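For reference, here is a minimal sketch of what the configuration could look like with that bundle. It is based on the bundle's documented setup, not on your repo; the env variable names and the ^/api firewall pattern are assumptions you would adapt:

# config/packages/lexik_jwt_authentication.yaml (sketch)
lexik_jwt_authentication:
    secret_key: '%env(resolve:JWT_SECRET_KEY)%'
    public_key: '%env(resolve:JWT_PUBLIC_KEY)%'
    pass_phrase: '%env(JWT_PASSPHRASE)%'

# config/packages/security.yaml (sketch, reusing your api_login route and email/password paths)
security:
    firewalls:
        login:
            pattern: ^/api/login
            stateless: true
            json_login:
                check_path: api_login
                username_path: email
                password_path: password
                success_handler: lexik_jwt_authentication.handler.authentication_success
                failure_handler: lexik_jwt_authentication.handler.authentication_failure
        api:
            pattern: ^/api
            stateless: true
            jwt: ~

With a setup along these lines the bundle's success handler builds and returns the token in the JSON response, so your ApiLoginController doesn't have to generate the JWT itself.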

Related

How to get Azure AD Group in Bicep to create SQL Server with azureADOnlyAuthentication

Is there a way to create or access an existing Azure AD Group using Azure Bicep? The scenario is that I want to create an Azure SQL Database, but in order to do so I need to create a server first. I want to create the server with an AD group as an administrator so that I don't have passwords/secrets to manage. I also want to use managed identities for access.
Is there a way to get the group name and sid? When I create a resource in bicep (i.e. resource sqlAdminGroup...) and search for 'group', I don't see a
Here is my bicep code:
resource sqlServer 'Microsoft.Sql/servers@2022-02-01-preview' = {
  name: '${namePrefix}sqlserver1'
  location: location
  properties: {
    administrators: {
      administratorType: 'ActiveDirectory'
      azureADOnlyAuthentication: true
      principalType: 'Group'
      login: sqlAdminGroupName
      sid: sqlAdminGroupObjectId
      tenantId: subscription().tenantId
    }
    publicNetworkAccess: 'Enabled'
    restrictOutboundNetworkAccess: 'Disabled'
    //subnetId: resourceId('Microsoft.Network/virtualNetworks/subnets', virtualNetworkName, subnetName)
  }
  identity: {
    type: 'SystemAssigned'
  }
}
I assume this is a common approach but I have not really found much on it when searching. I would like to create the group if it doesn't exist and get the login (sqlAdminGroupName) and sid (sqlAdminGroupObjectId) regardless, for use in the above code.
Just got mine to work; maybe this helps you as well. There were two things I had to change to get mine to deploy.
First, I did not specify an admin login or password under properties. Second, the 'login' string does not have to be the same as your actual AAD group name; in my instance, the AAD group had spaces in it and that was causing an error.
Here is my bicep, maybe it helps you or someone:
resource sqlServer 'Microsoft.Sql/servers@2022-02-01-preview' = {
  location: location
  name: 'sql${name}'
  properties: {
    version: '12.0'
    administrators: {
      administratorType: 'ActiveDirectory'
      principalType: 'Group'
      login: 'MyFunkyAdminGroupNameNotSameAsAAD'
      sid: '0000-my-aad-group-id-0000'
      tenantId: subscription().tenantId
    }
  }
}

How to store mongo db backups to google drive using Symfony 3.4

I am trying to upload a MongoDB backup to Google Drive.
I have installed the following bundles: dizda/cloud-backup-bundle and Happyr/GoogleSiteAuthenticatorBundle; for adapters I am using cache/adapter-bundle.
configuration:
dizda_cloud_backup:
    output_file_prefix: '%dizda_hostname%'
    timeout: 300
    processor:
        type: zip # Required: tar|zip|7z
        options:
            compression_ratio: 6
            password: '%dizda_compressed_password%'
    cloud_storages:
        google_drive:
            token_name: 'AIzaSyA4AE21Y-YqneV5f9POG7MPx4TF1LGmuO8' # Required
            remote_path: ~ # Not required, default "/", but you can use path like "/Accounts/backups/"
    databases:
        mongodb:
            all_databases: false # Only required when no database is set
            database: '%database_name%'
            db_host: '%mongodb_backup_host%'
            db_port: '%mongodb_port%'
            db_user: '%mongodb_user%'
            db_password: '%mongodb_password%'

cache_adapter:
    providers:
        my_redis:
            factory: 'cache.factory.redis'

happyr_google_site_authenticator:
    cache_service: 'cache.provider.my_redis'
    tokens:
        google_drive:
            client_id: '85418079755-28ncgsoo91p69bum6ulpt0mipfdocb07.apps.googleusercontent.com'
            client_secret: 'qj0ipdwryCNpfbJQbd-mU2Mu'
            redirect_url: 'http://localhost:8000/googledrive/'
            scopes: ['https://www.googleapis.com/auth/drive']
When I use factory: 'cache.factory.mongodb' I get
You have requested a non-existent service "cache.factory.mongodb"
while running the server, and when running the backup command I get
Something went terribly wrong. We could not create a backup. Read your log files to see what caused this error
In the logs I see: Command "--env=prod dizda:backup:start" exited with code "1" {"command":"--env=prod dizda:backup:start","code":1} []
I am not sure which adapter needs to be used and what's going on here.
Can someone help me? Thanks in advance.

SAILS-CBES adapter key, what is it?

I had issues correctly configuring my Couchbase adapter in Sails.js. I am using the sails-cbes adapter. The documentation fails to mention the key to use. For anyone who might struggle as I did, below is my configuration file:
{
  ...
  // couchbase
  cb: {
    adapter: 'sails-cbes',
    host: 'localhost',
    port: 8091,
    user: 'user',
    pass: 'password',
    bucket: {
      name: 'bucket',
      pass: 'bucketPassword'
    }
  }
},
...
Assuming that by 'key' you refer to the 'password' fields:
The first password is the one you set up in the dialogue the first time you log in to https://localhost:8091.
The bucket is not created automatically, so you have to do that manually in Couchbase. Then you have the option to set a password for the bucket itself, but the default is just an empty string. Elasticsearch indexing is automated as long as you declare the mapping in the model.
The configuration file should be in sails-project/config/connections.js and it should look something like this:
sailsCbes: {
  adapter: 'sails-cbes',
  cb: { ... },
  es: { ... }
}
You can try it out by creating a model within sails that uses this connection.
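For illustration, a minimal sketch of such a model, assuming the connection above is registered as sailsCbes (the model name and attributes are made up for the example):

// api/models/Article.js -- hypothetical model wired to the sails-cbes connection
module.exports = {
  connection: 'sailsCbes',
  attributes: {
    title: { type: 'string' },
    body: { type: 'string' }
  }
};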
As for the dependencies, you need to install couchbase and elasticsearch yourself, then from the sails-cbes folder do a sudo npm install and you should be good to go. For test dependencies, run npm install inside the test folder.
Hope this helps
I think you don't understand how Sails.js adapters work.
Please spend some time reading the Sails.js documentation, especially the connections configuration (adapters):
http://sailsjs.org/#!/documentation/reference/sails.config/sails.config.connections.html

Mongoid sessions not found

Trying out Sinatra with Mongoid 3, I run into the following error whenever I attempt to save to the database.
Mongoid::Errors::NoSessionsConfig:
Problem:
  No sessions configuration provided.
Summary:
  Mongoid's configuration requires that you provide details about each session that can be connected to, and requires in the sessions config at least 1 default session to exist.
Resolution:
  Double check your mongoid.yml to make sure that you have a top-level sessions key with at least 1 default session configuration for it. You can regenerate a new mongoid.yml for assistance via `rails g mongoid:config`.
Example:
  development:
    sessions:
      default:
        database: mongoid_dev
        hosts:
          - localhost:27017
from /Users/rhodee/.rbenv/versions/1.9.3-p194/lib/ruby/gems/1.9.1/gems/mongoid-3.0.13/lib/mongoid/sessions/factory.rb:61:in `create_session'
I've already confirmed the following:
Mongoid.yml file is loaded
The hash contains correct environment and db name
Using pry, the return value from the Mongoid.load! method is:
=> {"sessions"=>
{"default"=>
{"database"=>"bluster",
"hosts"=>["localhost:27017"],
"options"=>{"consistency"=>"strongĀ "}}}}
If it's any help, I've added the app.rb file and mongoid.yml file as well.
App.rb
require 'sinatra'
require 'mongoid'
require 'pry'
require 'routes'
require 'location'
configure :development do
  enable :logging, :dump_errors, :run, :sessions
  Mongoid.load!(File.join(File.dirname(__FILE__), "config", "mongoid.yml"))
end
Mongoid.yml
development:
  sessions:
    default:
      database: bluster
      hosts:
        - localhost:27017
      options:
        consistency: strong
require 'sinatra'
require 'mongoid'
require 'pry'
require 'routes'
configure :development do
  enable :logging, :dump_errors, :run, :sessions
  Mongoid.load!(File.join(File.dirname(__FILE__), "config", "mongoid.yml"))
end

get '/db' do
  "db: " << Mongoid.default_session[:moped].database.inspect
end
I put together an example, and it is working just fine for me. Probably your problem is something else, like the config file not having read access. Anyway, my config file is identical to yours and this is my sinatra file; it works fine.

Issues with Doctrine Versionable (Audit Log) Behavior

I'm using the Doctrine Versionable behavior for one of my models. The schema works fine and the tables are created. But when I try to load fixtures for it, I get a fatal error saying class TaxCodeVersion was not found. I checked my model dir, and indeed the class TaxCodeVersion is not generated by Doctrine. I always use the build --all --no-confirmation command. Am I missing something?
TaxCode:
  package: Taxes
  tableName: Fin_Tax_Codes
  actAs:
    Activateable: ~
    SoftDelete: ~
    Versionable:
      tableName: fin_tax_codes_version
      versionColumn: version
      className: %CLASS%Version
      auditLog: true
    Auditable: ~
    Timestampable: ~
    Multitenant: ~
  columns:
    id:
      type: integer(4)
      primary: true
      notnull: true
      autoincrement: true
    .....other columns.....
I've logged a bug here: please go through it and vote if it affects you.
If you can't live without this, you can carefully set up the model schema for the version class by manually creating a class file in the model directory and its parent in the base directory.
Make sure that there are no relations on the version table, and that all unique indexes are dropped.
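A rough sketch of what that manually created pair of classes could look like, assuming a Doctrine 1 project (symfony 1.x layout) and the fin_tax_codes_version table from the schema above. The column list is an assumption: it has to mirror TaxCode's columns, with id plus the version column forming the primary key and no relations or unique indexes:

<?php
// lib/model/doctrine/base/BaseTaxCodeVersion.class.php (hypothetical sketch)
// In a symfony 1.x project the base class would extend sfDoctrineRecord instead.
abstract class BaseTaxCodeVersion extends Doctrine_Record
{
    public function setTableDefinition()
    {
        $this->setTableName('fin_tax_codes_version');
        $this->hasColumn('id', 'integer', 4, array('primary' => true, 'notnull' => true));
        $this->hasColumn('version', 'integer', null, array('primary' => true));
        // ...mirror the remaining TaxCode columns here, without relations or unique indexes...
    }
}

// lib/model/doctrine/TaxCodeVersion.class.php (hypothetical sketch)
class TaxCodeVersion extends BaseTaxCodeVersion
{
}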