'length error in the tickerplant (kdb+/q)

When I start up tick.q (with sym.q) and feed.q, using the files provided, as follows:
q tick.q sym -p 5010
q feed.q
GitHub links: https://github.com/KxSystems/cookbook/tree/master/start/tick and
https://github.com/KxSystems/kdb-tick
The tickerplant process prints a 'length error on every update, which usually occurs when an incorrect number of elements is passed: https://code.kx.com/wiki/Errors
I suspect this happens when the feed process calls .u.upd.
Any suggestions as to how to solve this problem?

Entering \e 1 on the tickerplant's command line will cause execution to suspend when an error is hit and drop into the debugger, allowing you to see what failed and query the variables, which should help pinpoint what is causing the issue.
More about debugging here: https://code.kx.com/q/ref/debug/
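For example, in the tickerplant's q session:
q)\e 1   / suspend into the debugger when the next incoming message fails
q)\e 0   / restore normal error reporting once done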

If you are using the plain vanilla tick setup from KX there is no reason for that error to appear.
Also, I think you need to start the feed as q feed.q -t 200, otherwise you will get no data coming through.
Usually the 'length error appears when the data does not match the table schema. So if you have the sym.q file (and it is loaded correctly) you should not have that issue.
Just to confirm, this is the structure of your directory:
.
├── feed.q
├── README.md
├── tick
│   ├── r.q
│   ├── sym.q
│   └── u.q
└── tick.q
The sym.q file contains your table schema. If you change something in the feedhandler, the table schema in sym.q must match that change (i.e. if you add a column in the feed, you must also add a corresponding column to the table schema).
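As a sketch (illustrative column names, not the cookbook's exact schema), each update must line up with the schema column-for-column:
/ sym.q: a four-column schema (illustrative)
trade:([]time:`timespan$();sym:`symbol$();price:`float$();size:`long$())
/ the data passed to .u.upd must then carry one item per column;
/ sending only three items against this four-column table signals 'length on insert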

Open a new q session on some port (e.g. 9999), add your schema definition there and define .u.upd to capture the incoming data before upserting, something like this:
.u.upd:{[t;d]
  .test.t:t;   / capture the table name of the last update
  .test.d:d;   / capture the raw data of the last update
  t upsert d
 }
Now point your feed to this q session and stream some data; this will let you analyse the test variables when an error occurs.
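After a failing update you can then compare what arrived against the schema, for example:
q).test.t              / table name from the failing update
q)count .test.d        / number of items (columns) received
q)count cols .test.t   / number of columns the schema expects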

TemplateNotFound in Airflow

I have the following dir structure
.
├── ConfigSpark.yaml
├── project1
│   ├── dags
│   │   └── dag_1.py
│   └── sparkjob
│       └── spark_1.py
└── sparkutils
I'm trying to import the ConfigSpark.yaml file in my SparkKubernetesOperator using:
job = SparkKubernetesOperator(
    task_id='job',
    params=dict(
        app_name='job',
        mainApplicationFile='/opt/airflow/dags/project1/sparkjob/spark_1.py',
        driverCores=1,
        driverCoreRequest='250m',
        driverCoreLimit='500m',
        driverMemory='2G',
        executorInstances=1,
        executorCores=2,
        executorCoreRequest='1000m',
        executorCoreLimit='1000m',
        executorMemory='2G'
    ),
    application_file='/opt/airflow/dags/ConfigSpark.yaml',
    kubernetes_conn_id='conn_prd_eks',
    do_xcom_push=True
)
My DAG is returning the following error:
jinja2.exceptions.TemplateNotFound: /opt/airflow/dags/ConfigSpark.yaml
I've noticed that if the DAG is in the same directory as ConfigSpark.yaml my tasks run perfectly, but why is my task not running when I place my DAG in a subfolder?
I've checked my values.yaml file and airflowHome is /opt/airflow and defaultAirflowRepository is apache/airflow.
What is happening?
Airflow searches for the template file (ConfigSpark.yaml in your case) relative to the directory in which the DAG file is stored, so with your code it doesn't find the file automatically.
If you store the template file in the same folder your DAG file is stored in (/project1/dags), or a nested folder inside it, you can specify the relative path in your task:
job = SparkKubernetesOperator(
    ...,
    application_file='path/to/ConfigSpark.yaml',
    ...
)
Which would read the template file from /project1/dags/path/to/ConfigSpark.yaml.
However, if the folder your template file is stored in is not a child of the folder your DAG file is stored in, the above won't work. In that case you can specify template_searchpath on the DAG-level:
with DAG(..., template_searchpath="/opt/airflow/dags/repo/dags") as dag:
    job = SparkKubernetesOperator(
        task_id='job',
        application_file='ConfigSpark.yaml',
        ...,
    )
This path (for example /opt/airflow/dags) is added to the Jinja searchpath and that way ConfigSpark.yaml will be found.
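Put together, a minimal self-contained sketch of this variant might look like the following (the dag_id, start_date and searchpath value are placeholders to adapt; it assumes the cncf.kubernetes provider is installed):
from datetime import datetime

from airflow import DAG
from airflow.providers.cncf.kubernetes.operators.spark_kubernetes import SparkKubernetesOperator

with DAG(
    dag_id="dag_1",                           # placeholder
    start_date=datetime(2022, 1, 1),          # placeholder
    schedule_interval=None,
    template_searchpath="/opt/airflow/dags",  # folder containing ConfigSpark.yaml
) as dag:
    job = SparkKubernetesOperator(
        task_id="job",
        application_file="ConfigSpark.yaml",  # resolved against template_searchpath
        kubernetes_conn_id="conn_prd_eks",
        do_xcom_push=True,
    )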

How do I use the "Simics Training" and "QSP CPU" packages?

1 - There's a "Simics Training" package shown in the package manager, and a "targets\simics-user-training" and "targets\workshop-01". Where is the documentation about starting up and going through these trainings? (I assume this is different from the normal "my-simics-project-1/documentation.html" documentation, because that documentation never references either of those targets in the Getting Started section.)
2 - In the documentation there's a line: "The QSP-x86 package contains a legacy processor core which is used by default in the included simulated machines. To use more modern processors, the package QSP-CPU can be installed, which contains recent processor cores." How does one actually use the QSP-CPU to select a different CPU to be simulated? (Related: I see in the release notes a bunch of mentions of ICH10. Is that what the default QSP-x86 "targets\qsp-x86\firststeps.simics" is simulating? Ideally I'd like to simulate at least a PCH-based system.)
#Point 1
If you check the doc/ folder in your Simics project, you should have the lab instructions. It is a bit inconsistent that they are stand-alone PDFs, but that comes from how they are built currently. Look for nut-001 and workshop-01.
#Point 2 (and how come StackOverflow does not have heading styles? You can really use those to write nicely structured answers)
If you have installed everything, use the scripts "qsp-atom-core.simics" etc. to run the standard QSP setup but with a different type of core. For example:
> simics.bat targets\qsp-x86\qsp-client-core.simics
To see how that core is selected, open the script file. For example, for the client core, first type (or cat) the trampoline script in the project:
C:\Users\jengblo\simics-projects\my-simics-project-5>type targets\qsp-x86\qsp-client-core.simics
# Auto-generated file. Any changes will be overwritten!
decl { substitute "C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics" }
run-command-file "C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics"
Given that trampoline, go to the actual script file:
C:\Users\jengblo\simics-projects\my-simics-project-5>type C:\\Users\\jengblo\\AppData\\Local\\Programs\\Simics\\simics-qsp-cpu-6.0.1\\targets\\qsp-x86\\qsp-client-core.simics
# In order to run this, the QSP-x86 (2096), QSP-CPU (8112) and
# QSP-Clear-Linux (4094) packages should be installed.
decl {
! Script that runs the Quick Start Platform (QSP) with a client processor core.
params from "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
default cpu_comp_class = "x86-coffee-lake"
default num_cores = 4
}
run-command-file "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
And note how the "cpu_comp_class" parameter is set. The way to find the available classes is, admittedly, a bit obscure. In your running Simics session started from the client-core script (for example), check the classes of the components inside the motherboard:
simics> list-components board.mb
┌─────────┬─────────────────────────┐
│Component│Class │
├─────────┼─────────────────────────┤
│cpu0 │processor_x86_coffee_lake│
│gpu │pci_accel_vga_comp │
│memory │simple_memory_module │
│nb │northbridge_x58 │
│sb │southbridge_ich10 │
└─────────┴─────────────────────────┘
Note the class of the cpu0 component. To find other classes following the same pattern, use the list-classes command:
simics> list-classes substr = processor_x86
The following classes are available:
┌─────────────────────────────┬──────────────────────────────┐
│ Class │ Short description │
├─────────────────────────────┼──────────────────────────────┤
│processor_x86QSP1 │N/A (module is not loaded yet)│
│processor_x86QSP2 │N/A (module is not loaded yet)│
│processor_x86_airmont │N/A (module is not loaded yet)│
│processor_x86_broadwell_xeon │N/A (module is not loaded yet)│
...
You can then build a custom script to start with a given core. Follow the pattern of "qsp-client-core.simics" as found in the installation: copy that file into your project, and modify the core class as well as other parameters.
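For example, a copied-and-modified script might look like this (the core class name is illustrative; use whatever list-classes reported, dropping the processor_ prefix as in the coffee-lake example above):
# my-qsp-core.simics (sketch): same pattern as qsp-client-core.simics,
# with the core class swapped
decl {
! Runs the Quick Start Platform (QSP) with a different processor core.
params from "%simics%/targets/qsp-x86/qsp-clear-linux.simics"
default cpu_comp_class = "x86-broadwell-xeon"
default num_cores = 4
}
run-command-file "%simics%/targets/qsp-x86/qsp-clear-linux.simics"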

connect to kdb+ DB from a .q file

I am using kdb+ and Node.js. I need to send queries from Node to the database.
When I cd to the "db" directory and type q db, I have candles set.
Inside the "db" folder I created a file called startServer.q:
\p 8080
h:hopen `:localhost:8080:user:pass
When I run startServer.q it opens but it seems that the candles variable is not set.
How can I access this table from that file? I didn't find anything on the internet.
When you cd into the db folder and run q startServer.q, the variable candles will not be set because it has not been loaded in. You just need to do:
q)\l /path/to/db
after you run q startServer.q, and it will load in the table(s) in the db folder.
It would be a good idea to have startServer.q and db folder at the same level in your directory, i.e.
.
├── parent-directory
│   ├── db
│   └── startServer.q
then you could add the line
system["l db"];
to your startServer.q file and it would load in when you do q startServer.q.
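Putting it together, a minimal startServer.q could look like this (port kept from your example):
/ startServer.q
\p 8080          / listen on 8080 so the Node.js client can connect
system["l db"];  / load the db directory, defining candles in this process
Note that the hopen line in your original script is not needed on the server side: \p makes the process listen, and it is the Node.js client that opens the connection.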

How to execute sql file with Slick 3.0.0

I have a structure like this
src
└── main
    ├── resources
    │   └── inserts.sql
    └── my.package
        └── Main.scala
In Main.scala I want to take the file inserts.sql and use Slick 3.0.0 to execute it on my db.
A SQL string can be executed directly by using the SQLActionBuilder class.
Also, since the BufferedSource object we get from Source.fromResource is closeable, we should wrap it in a Using block.
import slick.jdbc.SetParameter.SetUnit
import slick.jdbc.SQLActionBuilder
import scala.io.Source
import scala.util.Using
// ...
Using(Source.fromResource("inserts.sql")) { insertsSqlSource =>
  val sqlActionBuilder = SQLActionBuilder(insertsSqlSource.mkString, SetUnit)
  database.run(sqlActionBuilder.asUpdate)
}
You can read file content:
val query = scala.io.Source.fromResource("inserts.sql").mkString
and then create query using sql or sqlu interpolators:
//https://scala-slick.org/doc/3.0.0/sql.html
sql"$query".as[ExpectedType]
and run it as always :)
PS: not tested, I don't have an environment prepared right now.
Looks like there is no way to execute a SQL file with Slick other than loading it into memory as a String and executing it with sql, sqlu or tsql.
Beware that in this case the $ interpolation is meant to insert bind variables into the query. To splice literal values into the query you must use #$ instead. Since in this case the variable is the whole query, we have to do:
val inserts_sql = Source.fromResource("inserts.sql").mkString
db.run(sqlu"#$inserts_sql")

Is it possible to auto-generate documentation for pytest tests?

I have a project which contains only pytest tests, without modules or classes, which test a remote project.
E.g. the structure:
.
├── __init__.py
├── test_basic_auth_app.py
├── test_basic_auth_user.py
├── test_advanced_app_id.py
├── test_advanced_user.py
└── test_oauth_auth.py
Tests look like:
"""
Service requires credentials (app_id, app_key) to be passed using the Basic Auth
"""
import base64

import pytest

import authorising.auth
from authorising.resources import Service


@pytest.fixture(scope="module")
def service_settings(service_settings):
    """Set auth mode to app_id/app_key"""
    service_settings.update({"backend_version": Service.Auth_app})
    return service_settings


def test_basic_auth_app_id_key(application):
    """Test client access with Basic HTTP Auth using app id and app key

    Configure Api/Service to use App ID / App Key Authentication
    and Basic HTTP Auth to pass the credentials.
    """
    credentials = application.authobj.credentials
    encoded = base64.b64encode(
        f"{credentials['app_id']}:{credentials['app_key']}".encode("utf-8")
    ).decode("utf-8")
    response = application.test_request()
    assert response.status_code == 200
    assert response.request.headers["Auth"] == "Basic %s" % encoded
Is it possible to auto generate documentation from docstrings e.g using Sphinx ?
You can use sphinx-apidoc to generate test documentation automatically from the Python docstrings.
For instance, if you have a directory structure like below:
.
├── docs
│   ├── rst
│   └── html
└── tests
    ├── __init__.py
    ├── test_basic_auth_app.py
    ├── test_basic_auth_user.py
    ├── test_advanced_app_id.py
    ├── test_advanced_user.py
    └── test_oauth_auth.py
and run:
sphinx-apidoc -o docs/rst tests
sphinx-build -a -b html docs/rst docs/html -j auto
all your HTML docs will be under docs/html.
sphinx-apidoc supports multiple options; see https://www.sphinx-doc.org/en/master/man/sphinx-apidoc.html
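If sphinx-apidoc was not run with --full, the conf.py used by the build also needs autodoc enabled and the tests importable; a minimal sketch (paths assume the layout above, with conf.py living in docs/rst):
# docs/rst/conf.py (sketch)
import os
import sys

# make the tests importable for autodoc
sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..", "tests")))

project = "tests"
extensions = ["sphinx.ext.autodoc"]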
When using Sphinx, you should add your tests folder to the Python path in the conf.py file:
import os
import sys

sys.path.insert(0, os.path.abspath(os.path.join(os.path.dirname(__file__), '..', 'tests')))
Then in each rst file you can simply write:
.. automodule:: test_basic_auth_app
   :members:
If you also want to document the test results, take a look at Sphinx-Test-Reports.