Elixir Postgres view returning empty dataset when testing - postgresql

I am trying to test a view created in Postgres, but it is returning an empty result set. However, when testing out the view in an Elixir interactive shell, I get back the expected data. Here are the steps I have taken to create and test the view:
Create a migration:
def up do
  execute """
  CREATE VIEW example_view AS
  ...
  """
end
Create the schema:
use Ecto.Schema
import Ecto.Changeset

schema "test_view" do
  field(:user_id, :string)
end
Test:
describe "example loads" do
setup [
:with_example_data
]
test "view" do
query = from(ev in Schema.ExampleView)
IO.inspect Repo.all(query)
end
end
The response back is an empty array []
Is there a setting I am missing that allows views to be queried in the test environment?

As pointed out in one of the comments:
iex, mix phx.server... run in the :dev environment and use the dev DB
tests use the :test environment and run against a separate DB
It actually makes a lot of sense, because you want your test suite to be reproducible and independent of whatever records you might create or edit in your dev env.
You can open iex in the :test environment to confirm that your query returns the empty array here too:
MIX_ENV=test iex -S mix
What you'll need is to populate your test DB with some known records before querying. There are at least 2 ways to achieve that: fixtures and seeds.
Fixtures:
define some helper functions to create records in test/support/test_helpers.ex (typically: take some attrs, merge in some defaults and call a create_ function from your context)
def foo_fixture(attrs \\ %{}) do
  {:ok, foo} =
    attrs
    |> Enum.into(%{name: "foo", bar: "default bar"})
    |> MyContext.create_foo()

  foo
end
call them within your setup function or test case before querying (see the sketch below)
side note: you should use DataCase for tests involving the DB. With DataCase, each test is wrapped in its own transaction and any fixture that you created will be rolled back at the end of the test, so tests are isolated and independent from each other.
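For instance, a minimal sketch of wiring a fixture into the test from the question (MyApp, MyApp.TestHelpers and the idea that foo_fixture/0 creates the rows backing example_view are assumptions; Repo and from/2 come from the generated DataCase):

defmodule MyApp.ExampleViewTest do
  use MyApp.DataCase, async: true

  import MyApp.TestHelpers

  describe "example loads" do
    setup [:with_example_data]

    test "view returns the seeded rows" do
      query = from(ev in Schema.ExampleView)
      refute Repo.all(query) == []
    end
  end

  # named setup callback: create the records the Postgres view selects from
  defp with_example_data(_context) do
    foo = foo_fixture()
    %{foo: foo}
  end
end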
Seeds:
If you want to include some "long-lasting" records as part of your "default state" (e.g. for a list of countries, categories...), you could define some seeds in priv/repo/seeds.exs.
The file should have been created by the Phoenix generator and indicates how to add seeds (typically with Repo.insert!/1).
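For example, a minimal seeds sketch (the Country schema and its fields are made-up placeholders):

# priv/repo/seeds.exs
alias MyApp.Repo

Repo.insert!(%MyApp.Country{code: "FR", name: "France"})
Repo.insert!(%MyApp.Country{code: "DE", name: "Germany"})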
By default (through the aliases Phoenix generates in mix.exs), those seeds are run right after your migrations whenever you run mix ecto.setup or mix ecto.reset, whichever env is used.
To apply any changes in seeds.exs, you can run the following:
# reset dev DB
mix ecto.reset
# reset test DB
MIX_ENV=test mix ecto.reset
If you need some seeds to be environment specific, you can always introduce different seed files (e.g. dev_seeds.exs) and modify your mix.exs to configure ecto.setup.
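As a sketch, the aliases section of mix.exs could look something like this (the first two aliases are the ones Phoenix generates; dev_seeds.exs and the dev-specific alias name are just examples):

# mix.exs
defp aliases do
  [
    "ecto.setup": ["ecto.create", "ecto.migrate", "run priv/repo/seeds.exs"],
    "ecto.reset": ["ecto.drop", "ecto.setup"],
    # dev-only extras on top of the common seeds
    "ecto.dev_setup": ["ecto.setup", "run priv/repo/dev_seeds.exs"]
  ]
end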
Seeds can be very helpful not only for tests but for dev/staging in the early stage of a project, while you are still tinkering a lot with your schema and you are dropping the DB frequently.
I usually find myself using a mix of both approaches.

Related

Is it possible to deactivate the autogenerated Drop?

I'm testing Alembic for a Python project. The autogeneration is really nice, but the autogenerated drops are not helpful if you need to work on customer databases with many different versions, for example.
Being able to activate or deactivate dropping for different scenarios would be the best solution.
I made my own configuration in env.py, so I can use more than one base script. But if I add a new script (defining a new table) and autogenerate a migration script, I get an autogenerated drop of all previously migrated tables.
I already looked at the mako file. Is it possible to integrate a restriction in the mako file?
I found a way to filter the list of migration operations: hand a filter function over to the process_revision_directives config flag (all of this is configured in env.py).
from alembic.operations import ops

def process_revision_directives(context, revision, directives):
    script = directives[0]
    # process both "def upgrade()" and "def downgrade()"
    for directive in (script.upgrade_ops, script.downgrade_ops):
        # rewrite the list of "ops" so that DropTableOp and DropColumnOp
        # are removed; needs a recursive function
        directive.ops = list(
            _filter_drop_elm(directive.ops)
        )

def _filter_drop_elm(directives):
    # filter Drop operations out of the list of directives and yield the rest
    for directive in directives:
        if isinstance(directive, ops.DropTableOp):
            continue
        elif isinstance(directive, ops.ModifyTableOps):
            # ModifyTableOps is a container of ALTER TABLE types of
            # commands; process those in place recursively
            directive.ops = list(
                _filter_drop_elm(directive.ops)
            )
            if not directive.ops:
                continue
        elif isinstance(directive, ops.AlterTableOp) and isinstance(directive, ops.DropColumnOp):
            continue
        # otherwise, if not filtered, yield out the directive
        yield directive
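To hand the function over to Alembic, pass it to context.configure via the process_revision_directives argument in env.py. A sketch of the relevant part of the generated run_migrations_online() (connectable and target_metadata come from the standard template):

# env.py (excerpt)
with connectable.connect() as connection:
    context.configure(
        connection=connection,
        target_metadata=target_metadata,
        # every autogenerated revision is passed through the filter above
        process_revision_directives=process_revision_directives,
    )
    with context.begin_transaction():
        context.run_migrations()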

Pytest + Appium test framework

I'm very new to automation development, and currently starting to write an appium+pytest based Android app testing framework.
I managed to run tests on a connected device using this code, which uses unittest:
class demo(unittest.TestCase):
    reportDirectory = 'reports'
    reportFormat = 'xml'
    dc = {}
    driver = None
    # testName = 'test_setup_tmotg_demo'

    def setUp(self):
        self.dc['reportDirectory'] = self.reportDirectory
        self.dc['reportFormat'] = self.reportFormat
        # self.dc['testName'] = self.testName
        self.dc['udid'] = 'RF8MA2GW1ZF'
        self.dc['appPackage'] = 'com.tg17.ud.internal'
        self.dc['appActivity'] = 'com.tg17.ud.ui.splash.SplashActivity'
        self.dc['platformName'] = 'android'
        self.dc['noReset'] = 'true'
        self.driver = webdriver.Remote('http://localhost:4723/wd/hub', self.dc)

    # def test_function1():
    #     code
    # def test_function2():
    #     code
    # def test_function3():
    #     code
    # etc...

    def tearDown(self):
        self.driver.quit()

if __name__ == '__main__':
    unittest.main()
As you can see all the functions are currently within 'demo' class.
The intention is to create several test cases for each part of the app (for example: registration, main screen, premium subscription, etc.). That could sum up to hundreds of test cases eventually.
It seems to me that simply continuing to list them all in this same class would be messy and would give me very limited control. However, I didn't find any other way to arrange my tests while keeping the device connected via Appium.
The question is what would be the right way to organize the project so that I can:
Set up the device with appium server
Run all the test suites in sequential order (registration, main screen, subscription, etc...).
Perform the cleaning... export results, disconnect device, etc.
I hope I described the issue clearly enough. Would be happy to elaborate if needed.
Well, you have a lot of questions here, so it might be good to split them up into separate threads. But first of all, you can learn a lot about how Appium works by checking out the documentation here, and for the unittest framework here.
All Appium cares about is the capabilities file (or variable). So you can either populate it manually or write some helper function to do that for you. Here is a list of what can be used (see the sketch after the example below).
You can create as many test classes (or suites) as you want and add them together in any order you wish. This helps to break things up into manageable chunks. (See example below.)
You will have to create some helper methods here as well, since Appium itself will not do much cleanup. You can use the adb command in the shell for managing Android devices.
import unittest
from unittest import TestCase

# Create a base class for common methods
class BaseTest(unittest.TestCase):
    # setUpClass is only run once per class, not before every test
    @classmethod
    def setUpClass(cls) -> None:
        # Init your driver and read the capabilities here
        pass

    @classmethod
    def tearDownClass(cls) -> None:
        # Do cleanup, close the driver, ...
        pass

# Use the BaseTest class from before
# You can then duplicate this class for other suites of tests
class TestLogin(BaseTest):
    @classmethod
    def setUpClass(cls) -> None:
        super(TestLogin, cls).setUpClass()
        # Do things here that are needed only once (like logging in)

    def setUp(self) -> None:
        # This is executed before every test
        pass

    def testOne(self):
        # Write your tests here
        pass

    def testTwo(self):
        # Write your tests here
        pass

    def tearDown(self) -> None:
        # This is executed after every test
        pass

if __name__ == '__main__':
    # Load the tests from the suite class we created
    test_cases = unittest.defaultTestLoader.loadTestsFromTestCase(TestLogin)
    # If you want to add more suites (TestSomethingElse being another class like TestLogin)
    test_cases.addTests(unittest.defaultTestLoader.loadTestsFromTestCase(TestSomethingElse))
    # Run the actual tests
    unittest.TextTestRunner().run(test_cases)
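As for populating the capabilities, a minimal sketch of a helper that builds the capabilities dict and starts a session (the values are just the ones from the question, the helper name is made up, and the Appium Python client is assumed):

from appium import webdriver

def make_driver(udid="RF8MA2GW1ZF",
                app_package="com.tg17.ud.internal",
                app_activity="com.tg17.ud.ui.splash.SplashActivity",
                server_url="http://localhost:4723/wd/hub"):
    # desired capabilities describing the device and the app under test
    caps = {
        "platformName": "android",
        "udid": udid,
        "appPackage": app_package,
        "appActivity": app_activity,
        "noReset": "true",
    }
    return webdriver.Remote(server_url, caps)

BaseTest.setUpClass could then call make_driver() once and keep the result in cls.driver.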

How can I use FastAPI Routers with FastAPI-Users and MongoDB?

I can use MongoDB with FastAPI either
with a global client: motor.motor_asyncio.AsyncIOMotorClient object, or else
by creating one during the startup event per this SO answer which refers to this "Real World Example".
However, I also want to use fastapi-users since it works nicely with MongoDB out of the box. The downside is it seems to only work with the first method of handling my DB client connection (ie global). The reason is that in order to configure fastapi-users, I have to have an active MongoDB client connection just so I can make the db object as shown below, and I need that db to then make the MongoDBUserDatabase object required by fastapi-users:
# main.py
app = FastAPI()

# Create global MongoDB connection
DATABASE_URL = "mongodb://user:password@localhost/auth_db"
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL, uuidRepresentation="standard")
db = client["my_db"]

# Set up fastapi_users
user_db = MongoDBUserDatabase(UserDB, db["users"])
cookie_authentication = CookieAuthentication(secret='lame secret', lifetime_seconds=3600, name='cookiemonster')
fastapi_users = FastAPIUsers(
    user_db,
    [cookie_authentication],
    User,
    UserCreate,
    UserUpdate,
    UserDB,
)
After that point in the code, I can import the fastapi_users Routers. However, if I want to break up my project into FastAPI Routers of my own, I'm hosed because:
If I move the client creation to another module to be imported into both my app and my routers, then I have different clients in different event loops and get errors like RuntimeError: Task <Task pending name='Task-4' coro=<RequestResponseCycle.run_asgi() running at /usr/local/lib/python3.8/site-packages/uvicorn/protocols/http/h11_impl.py:389> cb=[set.discard()]> got Future <Future pending cb=[_chain_future.<locals>._call_check_cancel() at /usr/local/lib/python3.8/asyncio/futures.py:360]> attached to a different loop (touched on in this SO question)
If I use the solutions of the "Real World Example", then I get stuck on where to build my fastapi_users object in my code example: I can't do it in main.py because there's no db object yet.
I considered making the MongoDBUserDatabase object part of the startup event code (i.e. within async def connect_to_mongo() from the Real World Example), but I haven't been able to get that to work either.
How can I either
make a global MongoDB client and FastAPI-User object in a way that can be shared among my main app and several routers without "attached to a different loop" errors, or
create fancy wrapper classes and functions to set up FastAPI users with the startup trigger?
I have run into the exact same dilemma; it almost seems like a design flaw. I don't think my solution is complete or correct, but I figured I'd post it in case it inspires any ideas, because I'm stumped.
I followed this MongoDB full example and named it main.py
At this point my app does not work: the server starts up, but any attempt to query the DB results in the aforementioned "attached to a different loop" error.
Looking for guidance, I stumbled upon the same "real world" example.
In main.py I added the startup and shutdown event handlers:
# Event handlers
app.add_event_handler("startup", create_start_app_handler(app=app))
app.add_event_handler("shutdown", create_stop_app_handler(app=app))
In dlw_api/db/events.py:
import logging

from dlw_api.user import UserDB
from fastapi import FastAPI
from fastapi_users.db.mongodb import MongoDBUserDatabase
from motor.motor_asyncio import AsyncIOMotorClient

LOG = logging.getLogger(__name__)

DB_NAME = "dlwLocal"
USERS_COLLECTION = "users"
DATABASE_URI = "mongodb://dlw-mongodb:27017"  # protocol://container_name:port

_client: AsyncIOMotorClient = None
_users_db: MongoDBUserDatabase = None

def get_users_db() -> MongoDBUserDatabase:
    return _users_db

async def connect_to_db(app: FastAPI) -> None:
    # keep the client in the module-level global so close_db_connection can reach it
    global _client, _users_db
    _client = AsyncIOMotorClient(DATABASE_URI)
    db = _client[DB_NAME]
    collection = db[USERS_COLLECTION]
    _users_db = MongoDBUserDatabase(UserDB, collection)
    LOG.info(f"Connected to {DATABASE_URI}")

async def close_db_connection(app: FastAPI) -> None:
    _client.close()
    LOG.info("Connection closed")
And in dlw_api/events.py:
from typing import Callable

from fastapi import FastAPI
from dlw_api.db.events import close_db_connection, connect_to_db, get_users_db
from dlw_api.user import configure_user_auth_routes
from fastapi_users.authentication import CookieAuthentication

COOKIE_SECRET = "THIS_NEEDS_TO_BE_SET_CORRECTLY"  # TODO: <--|
COOKIE_LIFETIME_SECONDS: int = 3_600
COOKIE_NAME = "c-is-for-cookie"

# Auth stuff:
_cookie_authentication = CookieAuthentication(
    secret=COOKIE_SECRET,
    lifetime_seconds=COOKIE_LIFETIME_SECONDS,
    name=COOKIE_NAME,
)

auth_backends = [
    _cookie_authentication,
]

def create_start_app_handler(app: FastAPI) -> Callable:
    async def start_app() -> None:
        await connect_to_db(app)
        configure_user_auth_routes(
            app=app,
            auth_backends=auth_backends,
            user_db=get_users_db(),
            secret=COOKIE_SECRET,
        )
    return start_app

def create_stop_app_handler(app: FastAPI) -> Callable:
    async def stop_app() -> None:
        await close_db_connection(app)
    return stop_app
This doesn't feel correct to me; does this mean all routes that use Depends for user auth have to be registered in the server startup event handler?
The author (frankie567) of fastapi-users created a repl.it showing a solution of sorts. My discussion about this solution may provide more context but the key parts of the solution are:
Don't bother using the FastAPI startup trigger along with Depends for your MongoDB connectivity management. Instead, create a separate file (i.e. db.py) to create your DB connection and client object. Import this db object whenever needed, like in your Routers, and then use it as a global.
Also create a separate users.py to do 2 things:
Create the globally used fastapi_users = FastAPIUsers(...) object for use with other Routers to handle authorization.
Create a fastapi.APIRouter() object and attach all the fastapi-users routers to it (router.include_router(...)).
In all your other Routers, import both db and fastapi_users from the above as needed.
Key: split your main code up into
a main.py which only imports uvicorn and serves app:app.
an app.py which has your main FastAPI object (i.e. app) and which then attaches all your Routers, including the one from users.py with all the fastapi-users routers attached to it.
By splitting up code per 4 above, you avoid the "attached to different loop" error.
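A condensed sketch of that layout (the models module, the URL, and the specific fastapi-users router methods are assumptions and may differ between library versions):

# db.py -- one client/db object created at import time, shared as a global
import motor.motor_asyncio

DATABASE_URL = "mongodb://user:password@localhost/auth_db"
client = motor.motor_asyncio.AsyncIOMotorClient(DATABASE_URL, uuidRepresentation="standard")
db = client["my_db"]

# users.py -- the shared fastapi_users object plus one router carrying its endpoints
from fastapi import APIRouter
from fastapi_users import FastAPIUsers
from fastapi_users.authentication import CookieAuthentication
from fastapi_users.db.mongodb import MongoDBUserDatabase

from db import db
from models import User, UserCreate, UserUpdate, UserDB  # assumed models module

user_db = MongoDBUserDatabase(UserDB, db["users"])
cookie_authentication = CookieAuthentication(secret="change-me", lifetime_seconds=3600, name="cookiemonster")
fastapi_users = FastAPIUsers(user_db, [cookie_authentication], User, UserCreate, UserUpdate, UserDB)

router = APIRouter()
# attach whichever fastapi-users routers you need
router.include_router(fastapi_users.get_auth_router(cookie_authentication), prefix="/auth/cookie", tags=["auth"])
router.include_router(fastapi_users.get_register_router(), prefix="/auth", tags=["auth"])

# app.py -- the FastAPI object that pulls every router together
from fastapi import FastAPI
from users import router as users_router

app = FastAPI()
app.include_router(users_router)
# app.include_router(my_other_router)  # your own routers import db/fastapi_users as needed

# main.py -- only serves app:app
import uvicorn

if __name__ == "__main__":
    uvicorn.run("app:app", host="0.0.0.0", port=8000)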
I faced a similar issue, and all I had to do to get motor and FastAPI running in the same loop was this:
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient

client = AsyncIOMotorClient()
client.get_io_loop = asyncio.get_event_loop
I did not set on_startup or anything like that.

Debugging test cases when they are combination of Robot framework and python selenium

Currently I'm using Eclipse with the Nokia/RED plugin, which allows me to write Robot Framework test suites. It supports Python 3.6 and Selenium.
My project is called "Automation" and the test suites are in .robot files.
The test suites contain test cases, which call "Keywords".
*** Test Cases ***
Create New Vehicle
    Create new vehicle with next ${registrationno} and ${description}
    Navigate to data section
Those "Keywords" are imported from python library and look like:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
basicVehicleCreation = sideBarPage.createNewVehicle()
basicVehicleCreation.setKennzeichen(registrationno)
basicVehicleCreation.setBeschreibung(description)
TestCaseKeywords.carnumber = basicVehicleCreation.save()
The problem is that when I run the test cases, in the log I only get the result of the whole Python function, pass or fail. I can't see at which step it failed: was it the first or the second step of the function?
Is there any plugin or other solution that lets me see exactly which Python call passed or failed? (Of course, a workaround is to use a separate keyword for every function in the test case, but that is not what I prefer.)
If you need to "step into" a python defined keyword you need to use python debugger together with RED.
This can be done with any python debugger,if you like to have everything in one application, PyDev can be used with RED.
Follow below help document, if you will face any problems leave a comment here.
RED Debug with PyDev
If you want to know which statement in the Python-based keyword failed, you simply need to have it throw an appropriate error. Robot won't do this for you, however; from a reporting standpoint, a Python-based keyword is a black box. You will have to explicitly add logging messages and raise useful errors.
For example, the call to sideBarPage.createNewVehicle() should throw an exception such as "unable to create new vehicle". Likewise, the call to basicVehicleCreation.setKennzeichen(registrationno) should raise an error like "failed to register the vehicle".
If you don't have control over those methods, you can do the error handling from within your keyword:
#keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_Simple(self,registrationno, description):
headerPage = HeaderPage(TestCaseKeywords.driver)
sideBarPage = headerPage.selectDaten()
try:
basicVehicleCreation = sideBarPage.createNewVehicle()
except:
raise Exception("unable to create new vehicle")
try:
basicVehicleCreation.setKennzeichen(registrationno)
except:
raise exception("unable to register new vehicle")
...
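If you also want each step to show up in the Robot log even when nothing fails, one option (a sketch reusing the same page objects from the question) is to log from inside the keyword with robot.api.logger:

from robot.api import logger

@keyword("Create new vehicle with next ${registrationno} and ${description}")
def create_new_vehicle_simple(self, registrationno, description):
    headerPage = HeaderPage(TestCaseKeywords.driver)
    sideBarPage = headerPage.selectDaten()
    logger.info("navigated to the data section")
    basicVehicleCreation = sideBarPage.createNewVehicle()
    logger.info("opened the new-vehicle form")
    basicVehicleCreation.setKennzeichen(registrationno)
    basicVehicleCreation.setBeschreibung(description)
    TestCaseKeywords.carnumber = basicVehicleCreation.save()
    logger.info("saved vehicle %s" % registrationno)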

Accessing cache.dat through ODBC

Ok, so I am trying to extract the information from a CACHE.DAT database sent from another business, and I am trying to get at the data using ODBC. I am able to see the globals from the samples namespace when exporting to Access, but I can't get the data from this new database to show up.
I've tried to tackle this problem two ways. First, I simply shut down Cache, replaced the existing database in InterSystems\TryCache\mgr\samples and restarted Cache. Once I restart, I can see all the globals in the Management Portal from the new database. If I test the ODBC connection from the Windows ODBC administrator, it connects. However, when I try to pull them into an Access database using ODBC, there are no tables showing up to import.
I've also tried to add the database to my Cache but it gave me the error:
ERROR #5805: ID key not unique for extent 'Config.Databases'
I tried to fool around with the values in there but to no avail. This is my first time messing with anything like this and any, ANY help would be awesome.
If you access the Management Portal, do you see any table definitions defined for your namespace? If not, the application was written in Cache ObjectScript with no classes created to provide object/SQL access. If this is the case, it could be a fair amount of work to create the classes that describe the data (the global structures).
Matt,
Did the business that provided the CACHE.DAT file indicate that you should have ODBC access to the data?
Did they provide some document describing the data/globals? If they provided a document that describes the globals you could create the classes that map the data. Depending on what you want to do this could either be a resource intensive process or not.
If you want to directly access globals you can create a stored procedure that will do so. You should consider the security implications before you do this - it will expose all data in the global to anyone with ODBC access.
Here is an example of a stored procedure that returns the values of up to 9 global subscripts, plus the value at that node. You can modify it pretty easily if you need to.
Query OneGlobal(GlobalName As %String) As %Query(ROWSPEC = "NodeValue:%String,Sub1:%String,Sub2:%String,Sub3:%String,Sub4:%String,Sub5:%String,Sub6:%String,Sub7:%String,Sub8:%String,Sub9:%String") [ SqlProc ]
{
}

ClassMethod OneGlobalExecute(ByRef qHandle As %Binary, GlobalName As %String) As %Status
{
    S qHandle="^"_GlobalName
    Quit $$$OK
}

ClassMethod OneGlobalClose(ByRef qHandle As %Binary) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    Quit $$$OK
}

ClassMethod OneGlobalFetch(ByRef qHandle As %Binary, ByRef Row As %List, ByRef AtEnd As %Integer = 0) As %Status [ PlaceAfter = OneGlobalExecute ]
{
    S Q=qHandle
    S Q=$Q(@Q)
    I Q="" S Row="",AtEnd=1 Q $$$OK
    S Depth=$QL(Q)
    S $LI(Row,1)=$G(@Q)
    F I=1:1:Depth S $LI(Row,I+1)=$QS(Q,I)
    F I=Depth+1:1:9 S $LI(Row,I+1)=""
    S AtEnd=0
    S qHandle=Q
    Quit $$$OK
}
I don't have code for you to get this from Access, but for reference, to access this from Python you might use something like the following (with pyodbc):
import pyodbc

class CacheOdbcClient:
    connectionString = "DSN=MYCACHEDSN"

    def getGlobalAsOverlyLargeList(self):
        # call the stored procedure defined above and collect every node of the global
        connection = pyodbc.connect(self.connectionString)
        cursor = connection.cursor()
        cursor.execute("call MyPackageName.MyClassName_OneGlobal ?", "MYGLOBAL")
        rows = []
        for row in cursor:
            rows.append((row.NodeValue, row.Sub1, row.Sub2, row.Sub3, row.Sub4,
                         row.Sub5, row.Sub6, row.Sub7, row.Sub8, row.Sub9))
        return rows
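A quick usage sketch (the DSN, package, class, and global names above are placeholders you would replace with your own):

client = CacheOdbcClient()
for node in client.getGlobalAsOverlyLargeList():
    # each entry is (NodeValue, Sub1, ..., Sub9) for one node of the global
    print(node)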