Prisma migration to MySQL on localhost fails

I am new to Prisma & MySQL. I am trying to migrate some sample tables to MySQL. Please, I need your help...
File - schema.prisma
generator client {
  provider = "prisma-client-js"
}

datasource db {
  provider = "mysql"
  url      = env("DATABASE_URL")
}
...
File - .env
DATABASE_URL=mysql://admin:admin@localhost:3306/nss
...
MySQL Shell
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> select user();
+-----------------+
| user()          |
+-----------------+
| admin@localhost |
+-----------------+
1 row in set (0.00 sec)
mysql> SHOW GLOBAL VARIABLES LIKE 'PORT';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| port          | 3306  |
+---------------+-------+
1 row in set (0.25 sec)
mysql>
package.json
{
  "name": "node-graphql",
  "version": "1.0.0",
  "license": "MIT",
  "scripts": {
    "dev": "nodemon src/server.js"
  },
  "prettier": {
    "semi": false,
    "singleQuote": true,
    "trailingComma": "all"
  },
  "dependencies": {
    "apollo-server": "3.6.3",
    "graphql": "15.8.0",
    "graphql-scalars": "1.14.1",
    "nexus": "1.2.0"
  },
  "devDependencies": {
    "@prisma/client": "^3.12.0",
    "nodemon": "2.0.15",
    "prisma": "^3.12.0"
  },
  "prisma": {
    "seed": "node prisma/seed.js"
  }
}
In the VS Code terminal, I issued the command
npx prisma migrate dev
and got the message: Error: Migration engine error:
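For reference, Prisma's MySQL connection string has the form mysql://USER:PASSWORD@HOST:PORT/DATABASE, and any special characters in the password must be percent-encoded. A minimal sketch of a well-formed .env (credentials from the question; the encoded variant is purely illustrative):

# .env
# '@' separates the credentials from the host
DATABASE_URL="mysql://admin:admin@localhost:3306/nss"
# if the password contained special characters, e.g. adm!n#1,
# it would need percent-encoding:
# DATABASE_URL="mysql://admin:adm%21n%231@localhost:3306/nss"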

Related

How can I start a Nest app with a PostgreSQL DB on Win 10?

I'm a front-end dev and I need to test my front-end app locally with a backend (NestJS) and a PostgreSQL DB. Can someone show me the right way to run and connect to the DB? I get some errors on app start. I work on Win 10, and these are my steps to start the app:
install postgresql
npm install for my nest js app
run pgAdmin4 and create DB for my app
npm start
Here is my ormconfig:
module.exports = {
  "type": "postgres",
  "host": process.env.POSTGRES_HOST || "localhost",
  "port": process.env.POSTGRES_PORT || 5432,
  "username": process.env.POSTGRES_USER || "", // <- here I tried to set every possible username
  "password": process.env.POSTGRES_PASSWORD || "", // <- here I tried to set every possible password
  "database": process.env.POSTGRES_DB || "my_database",
  "entities": ["dist/**/*.entity{.ts,.js}"],
  "synchronize": true,
  "logging": true
}
Here is the error that I encountered:
[error screenshot from the original post]
Also, on another computer I tried this and got an error like:
[Nest] ERROR [TypeOrmModule] Unable to connect to the database.
FATAL: password authentication failed for user "postgres" (postgresql 14 with pgAdmin 4)
In your typeormconfig.ts, you should write this:
import { TypeOrmModuleOptions, TypeOrmOptionsFactory } from '@nestjs/typeorm';

export class PostgresTypeormConfiguration implements TypeOrmOptionsFactory {
  createTypeOrmOptions(connectionName?: string): TypeOrmModuleOptions | Promise<TypeOrmModuleOptions> {
    const TypeOrmOptions: TypeOrmModuleOptions = {
      type: 'postgres',
      host: process.env.POSTGRES_HOST,
      // env vars are strings; TypeORM expects the port as a number
      port: parseInt(process.env.POSTGRES_PORT ?? '5432', 10),
      username: process.env.POSTGRES_USER,
      password: process.env.POSTGRES_PASSWORD,
      database: process.env.POSTGRES_DB,
      entities: ['dist/**/*.entity{.ts,.js}'],
      synchronize: true,
      logging: true,
    };
    return TypeOrmOptions;
  }
}
and you should define this in your module like this:
@Module({
  imports: [TypeOrmModule.forRootAsync({ useClass: PostgresTypeormConfiguration })],
})
Note: if you still get an error, you wrote one of the config options wrong in your .env file, or you did not define the .env file in your ConfigModule.
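For completeness, a matching .env might look like this (a minimal sketch; all values are placeholders, not taken from the original post):

POSTGRES_HOST=localhost
POSTGRES_PORT=5432
POSTGRES_USER=postgres
POSTGRES_PASSWORD=changeme
POSTGRES_DB=my_database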

api.targets is not a function in plugin-proposal-object-rest-spread

I have this error when I try to compile my app. The error comes from babel-plugin-proposal-object-rest-spread, here: https://github.com/babel/babel/blob/6e551ae8827d064680c1344074db9fb3093967e9/packages/babel-plugin-proposal-object-rest-spread/src/index.js#L22
Trace: error TypeError: api.targets is not a function
| at /home/username/Documents/front-newlook/node_modules/next/node_modules/@babel/preset-env/node_modules/@babel/plugin-proposal-object-rest-spread/lib/index.js:38:25
| at /home/username/Documents/front-newlook/node_modules/next/node_modules/@babel/preset-env/node_modules/@babel/helper-plugin-utils/lib/index.js:19:12
| at /home/username/Documents/front-newlook/node_modules/next/node_modules/@babel/core/lib/config/full.js:166:14
| at cachedFunction (/home/username/Documents/front-newlook/node_modules/next/node_modules/@babel/core/lib/config/caching.js:32:19)
| at loadPluginDescriptor (/home/username/Documents/front-newlook/node_modules/next/node_modules/@babel/core/lib/config/full.js:201:28)
| at /home/username/Documents/front-newlook/node_modules/next/node_modules/@babel/core/lib/config/full.js:71:20
My babel.config.json is like this:
{
  "plugins": [
    "emotion",
    "macros",
    "@babel/plugin-proposal-class-properties"
  ],
  "presets": ["next/babel"],
  "env": {
    "test": {
      "plugins": ["require-context-hook"]
    }
  }
}
Does anyone have an idea? Thank you.
The problem was solved when I removed the node_modules folder and then ran yarn install.
Run "npm install" once; it will fix the issue.

AWS CLI : How to get the API Gateway ID

Is there a way to get the API Gateway ID by name, or to iterate the list and return it by its name, from the AWS CLI? I tried the following and it doesn't return anything:
aws apigateway get-rest-apis --query 'items[?name==`TestAPI`].value' --output text --region us-east-1
Thanks in advance.
Update: here is the list output:
"items": [
{
"id": "5aa9gcij77",
"name": "JavaLamdba",
"description": "JavaLamdba",
"createdDate": 1608225655,
"apiKeySource": "HEADER",
"endpointConfiguration": {
"types": [
"REGIONAL"
]
}
},
aws apigateway get-rest-apis --query 'items[?name==`JavaLamdba`].id' --output text --region us-east-1
This should give you the expected result
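The original query projected a nonexistent value key; each item returned by get-rest-apis has an id field, which is why querying .id works. A minimal sketch of capturing the ID in a shell variable (API name and region taken from the question):

# look up a REST API's ID by name and store it in a variable
API_ID=$(aws apigateway get-rest-apis \
  --query 'items[?name==`JavaLamdba`].id' \
  --output text \
  --region us-east-1)
echo "API Gateway ID: ${API_ID}"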
Alternatively, you can query this with SQL using Steampipe (https://steampipe.io/):
> select name, description, created_date from aws.aws_api_gateway_rest_api where name = 'lambda-test';
+-------------+-------------+---------------------+
| name        | description | created_date        |
+-------------+-------------+---------------------+
| lambda-test | lambda-test | 2019-07-25 09:05:16 |
+-------------+-------------+---------------------+

Problem when querying Raw Data with STH-Comet - Returns empty

I have Orion, Cygnus and STH-Comet (installed and configured in formal mode). Each component runs in its own Docker container. I deployed the infrastructure with docker-compose.yml.
The Cygnus container is configured as follows:
image: fiware/cygnus-ngsi:latest
hostname: cygnus
container_name: cygnus
volumes:
  - /home/ubuntu/cygnus/multisink_agent.conf:/opt/fiware-cygnus/docker/cygnus-ngsi/multisink_agent.conf
depends_on:
  - mongo
networks:
  - default
expose:
  - "5050"
  - "5080"
ports:
  - "5050:5050"
  - "5080:5080"
environment:
  - CYGNUS_SERVICE_PORT=5050
  - CYGNUS_MONITORING_TYPE=http
  - CYGNUS_AGENT_NAME=cygnus-ngsi
  - CYGNUS_MONGO_SERVICE_PORT=5050
  - CYGNUS_MONGO_HOSTS=mongo:27017
  - CYGNUS_MONGO_USER=
  - CYGNUS_MONGO_PASS=
  - CYGNUS_MONGO_ENABLE_ENCODING=false
  - CYGNUS_MONGO_ENABLE_GROUPING=false
  - CYGNUS_MONGO_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_MONGO_DATA_MODEL=dm-by-entity
  - CYGNUS_MONGO_ATTR_PERSISTENCE=column
  - CYGNUS_MONGO_DB_PREFIX=sth_
  - CYGNUS_MONGO_COLLECTION_PREFIX=sth_
  - CYGNUS_MONGO_ENABLE_LOWERCASE=false
  - CYGNUS_MONGO_BATCH_TIMEOUT=30
  - CYGNUS_MONGO_BATCH_TTL=10
  - CYGNUS_MONGO_DATA_EXPIRATION=0
  - CYGNUS_MONGO_COLLECTIONS_SIZE=0
  - CYGNUS_MONGO_MAX_DOCUMENTS=0
  - CYGNUS_MONGO_BATCH_SIZE=1
  - CYGNUS_LOG_LEVEL=DEBUG
  - CYGNUS_SKIP_CONF_GENERATION=false
  - CYGNUS_STH_ENABLE_ENCODING=false
  - CYGNUS_STH_ENABLE_GROUPING=false
  - CYGNUS_STH_ENABLE_NAME_MAPPINGS=false
  - CYGNUS_STH_DB_PREFIX=sth_
  - CYGNUS_STH_COLLECTION_PREFIX=sth_
  - CYGNUS_STH_DATA_MODEL=dm-by-entity
  - CYGNUS_STH_ENABLE_LOWERCASE=false
  - CYGNUS_STH_BATCH_TIMEOUT=30
  - CYGNUS_STH_BATCH_TTL=10
  - CYGNUS_STH_DATA_EXPIRATION=0
  - CYGNUS_STH_BATCH_SIZE=1
Note: in the multisink_agent.conf file I changed the service and the service path:
cygnus-ngsi.sources.http-source-mongo.handler.default_service = tese
cygnus-ngsi.sources.http-source-mongo.handler.default_service_path = /iot
And the STH-Comet container looks like this:
image: fiware/sth-comet:latest
hostname: sth
container_name: sth
depends_on:
  - cygnus
  - mongo
networks:
  - default
expose:
  - "8666"
ports:
  - "8666:8666"
environment:
  - STH_HOST=0.0.0.0
  - STH_PORT=8666
  - DB_URI=mongo:27017
  - DB_USERNAME=
  - DB_PASSWORD=
  - LOGOPS_LEVEL=DEBUG
In the STH-Comet config.js file I enabled CORS and I changed the defaultService and the defaultServicePath. The file looks like this:
var config = {};

// STH server configuration
//--------------------------
config.server = {
  host: 'localhost',
  port: '8666',
  // Default value: "testservice".
  defaultService: 'tese',
  // Default value: "/testservicepath".
  defaultServicePath: '/iot',
  filterOutEmpty: 'true',
  aggregationBy: ['day', 'hour', 'minute'],
  temporalDir: 'temp',
  maxPageSize: '100'
};

// CORS configuration
//------------------------
config.cors = {
  // "enabled" is used to set the CORS policy
  enabled: 'true',
  options: {
    origin: ['*'],
    headers: [
      'Access-Control-Allow-Origin',
      'Access-Control-Allow-Headers',
      'Access-Control-Request-Headers',
      'Origin, Referer, User-Agent'
    ],
    additionalHeaders: ['fiware-servicepath', 'fiware-service'],
    credentials: 'true'
  }
};

// Database configuration
//------------------------
config.database = {
  dataModel: 'collection-per-entity',
  user: '',
  password: '',
  authSource: '',
  URI: 'localhost:27017',
  replicaSet: '',
  prefix: 'sth_',
  collectionPrefix: 'sth_',
  poolSize: '5',
  writeConcern: '1',
  shouldStore: 'both',
  truncation: {
    expireAfterSeconds: '0',
    size: '0',
    max: '0'
  },
  ignoreBlankSpaces: 'true',
  nameMapping: {
    enabled: 'false',
    configFile: './name-mapping.json'
  },
  nameEncoding: 'false'
};

// Logging configuration
//------------------------
config.logging = {
  level: 'debug',
  format: 'pipe',
  proofOfLifeInterval: '60',
  processedRequestLogStatisticsInterval: '60'
};

module.exports = config;
I use Cygnus to persist historical data. STH-Comet is used only to query raw and aggregated data.
The Cygnus subscription in Orion looks like this:
"description": "A subscription All Entities",
"subject": {
"entities": [
{
"idPattern": ".*"
}
],
"condition": {
"attrs": []
}
},
"notification": {
"http": {
"url": "http://cygnus:5050/notify"
},
"attrs": [],
"attrsFormat":"legacy"
},
"expires": "2040-01-01T14:00:00.00Z",
"throttling": 5
}
The headers used for fiware-service and fiware-servicepath are:
Fiware-service: tese
Fiware-servicepath: /iot
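For reference, a raw-data request carrying these headers can be sketched with curl (endpoint and values taken from the requests shown further below):

curl -X GET 'http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10' \
  -H 'fiware-service: tese' \
  -H 'fiware-servicepath: /iot'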
The entity data is stored in orion-tese, in the collection entities:
{
  "_id" : {
    "id" : "Tank1",
    "type" : "Tank",
    "servicePath" : "/iot"
  },
  "attrNames" : [
    "temperature"
  ],
  "attrs" : {
    "temperature" : {
      "value" : 0.333,
      "type" : "Float",
      "mdNames" : [ ],
      "creDate" : 1594334464,
      "modDate" : 1594337770
    }
  },
  "creDate" : 1594334464,
  "modDate" : 1594337771,
  "lastCorrelator" : "f86d0d74-c23c-11ea-9c82-0242ac1c0005"
}
The raw and aggregated data are stored in sth_tese.
I have the collections:
sth_/iot_Tank1_Tank.aggr
and
sth_/iot_Tank1_Tank
The raw data of sth_/iot_Tank1_Tank, as stored in MongoDB:
{
  "_id" : ObjectId("5f079d0369591c06b0fc981a"),
  "temperature" : 279,
  "recvTime" : ISODate("2020-07-09T22:41:05.670Z")
}
{
  "_id" : ObjectId("5f07a9eb69591c06b0fc981b"),
  "temperature" : 0.333,
  "recvTime" : ISODate("2020-07-09T23:36:11.160Z")
}
When I run:
http://localhost:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?aggrMethod=sum&aggrPeriod=minute
or
http://localhost:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&aggrMethod=sum&aggrPeriod=minute
I get the results "sum": 279 and "sum": 0.333. I can recover ALL the aggregated data: max, min, sum, sum2.
The difficulty is with STH-Comet when I try to retrieve the raw data: the return code is 200 but the value comes back empty.
I've tried APIs v1 and v2, to no avail.
request with v2:
http://sth:8666/STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
Return
{
  "type": "StructuredValue",
  "value": []
}
request with v1:
http://sth:8666/STH/v1/contextEntities/type/Tank/id/Tank1/attributes/temperature?lastN=10
Return
{
  "contextResponses": [{
    "contextElement": {
      "attributes": [{
        "name": "temperature",
        "values": []
      }],
      "id": "Tank1",
      "isPattern": false,
      "type": "Tank"
    },
    "statusCode": {
      "code": "200",
      "reasonPhrase": "OK"
    }
  }]
}
The STH-Comet log shows that it is online and connects to the correct database:
time=2020-07-09T22:39:06.698Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Establishing connection to the database at mongodb://@mongo:27017/sth_tese
time=2020-07-09T22:39:06.879Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_DB_CONN_OPEN | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Connection successfully established to the database at mongodb://@mongo:27017/sth_tese
time=2020-07-09T22:39:07.218Z | lvl=INFO | corr=n/a | trans=n/a | op=OPER_STH_SERVER_START | from=n/a | srv=n/a | subsrv=n/a | comp=STH | msg=Server started at http://0.0.0.0:8666
The STH-Comet log with the api v2 request:
time=2020-07-09T23:46:47.400Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=GET /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.404Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Getting access to the raw data collection for retrieval...
time=2020-07-09T23:46:47.408Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=The raw data collection for retrieval exists
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=No raw data available for the request: /STH/v2/entities/Tank1/attrs/temperature?type=Tank&lastN=10
time=2020-07-09T23:46:47.412Z | lvl=DEBUG | corr=998811d9-fac2-4701-b37c-bb9ae1b45b81 | trans=998811d9-fac2-4701-b37c-bb9ae1b45b81 | op=OPER_STH_GET | from=n/a | srv=tese | subsrv=/iot | comp=STH | msg=Responding with no points
According to the log, it gets access to the raw data collection for retrieval (msg=Getting access to the raw data collection for retrieval...) and confirms that the collection exists (msg=The raw data collection for retrieval exists). But it cannot recover the data: it reports that no raw data is available and returns no points (msg=No raw data available for the request and msg=Responding with no points).
I have already read the configuration part of the documentation. I've reinstalled everything several times. I've combed through all the settings and I can't find anything that justifies this problem.
What could it be?
Could someone with expertise in STH-Comet give any guidance?
Thanks!
Sometimes the way in which STH tries to recover information doesn't match the way in which Cygnus stores it. However, that doesn't seem to be the case here. The data model used by STH is configured with config.database.dataModel and it seems to be correct: collection-per-entity (as you have collections like sth_/iot_Tank1_Tank, which corresponds to a single entity, i.e. the one with id Tank1 and type Tank).
Assuming that the setting in config.js is not being overridden by the DATA_MODEL env var (although it would be wise to check that by looking at the env vars actually injected into the Docker container running STH, e.g. with docker inspect, as sketched below), the only way I think we can continue debugging is to inspect which actual query STH runs on MongoDB that ends in No raw data available for the request.
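A minimal sketch of that check (assuming the container is named sth, as in the compose file above):

# print the environment variables actually injected into the running container
docker inspect --format '{{json .Config.Env}}' sth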
MongoDB has a profiler that allows recording every query done in the DB. Thus the procedure would be as follows:
1. Avoid (or minimize) any other usage of the MongoDB instance, to avoid "noise" in the information recorded by the profiler.
2. Start the profiler in "all queries" mode (i.e. profiling level 2).
3. Do the query at the STH API.
4. Stop the profiler.
5. Check the queries recorded by the profiler as a consequence of the request done in step 3.
Explaining the usage of the MongoDB profiler is out of the scope of this answer, but the reference I provided above is a good starting point if you don't already know it.
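That said, a minimal mongo shell sketch of steps 2-5 (assuming the database name sth_tese from the question):

// step 2: start recording all operations (profiling level 2)
use sth_tese
db.setProfilingLevel(2)
// step 3: issue the STH API request from another terminal, then...
// step 4: stop the profiler
db.setProfilingLevel(0)
// step 5: inspect the most recently recorded queries
db.system.profile.find().sort({ ts: -1 }).limit(5).pretty()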
Once you have information about the queries, please provide feedback as comments to this answer. Thanks!

Importing OPoint data into OrientDB 2.2.x using ETL from a CSV file

This is related to my earlier questions:
Spatial query with sub-select (I figured this one out)
OrientDB spatial query to find all pairs within X km of each other (still looking for a useful answer)
In response to (2), I am looking at modifying my Nazca geoglyph dataset to use the WKT version, to be consistent with the newer OrientDB 2.2.x Spatial Index functionality.
My input CSV file, nazca_lines_wkt.csv is this:
Name,Location
Hummingbird,POINT(-75.148892 -14.692131)
Monkey,POINT(-75.138532 -14.706940)
Condor,POINT(-75.126208 -14.697444)
Spider,POINT(-75.122381 -14.694145)
Spiral,POINT(-75.122746 -14.688277)
Hands,POINT(-75.113881 -14.694459)
Tree,POINT(-75.114520 -14.693898)
Astronaut,POINT(-75.079755 -14.745222)
Dog,POINT(-75.130788 -14.706401)
Wing,POINT(-75.100385 -14.680309)
Parrot,POINT(-75.107498 -14.689463)
I create an empty PLOCAL database, nazca-wkt.orientdb, and define a GeoGlyphWKT vertex class:
CREATE DATABASE PLOCAL:nazca-wkt.orientdb admin admin plocal graph
CREATE CLASS GeoGlyphWKT EXTENDS V
CREATE PROPERTY GeoGlyphWKT.Name STRING
CREATE PROPERTY GeoGlyphWKT.Location EMBEDDED OPoint
CREATE PROPERTY GeoGlyphWKT.Tag EMBEDDEDSET STRING
I have two .json files that I use for the oetl script:
nazca_lines_wkt.json
{
  "config": {
    "log": "info",
    "fileDirectory": "./",
    "fileName": "nazca_lines_wkt.csv"
  }
}
commonGeoGlyphWKT.json
{
  "begin": [ { "let": { "name": "$filePath", "expression": "$fileDirectory.append($fileName)" } } ],
  "config": { "log": "debug" },
  "source": { "file": { "path": "$filePath" } },
  "extractor": {
    "csv": {
      "ignoreEmptyLines": true,
      "nullValue": "N/A",
      "separator": ",",
      "columnsOnFirstLine": true,
      "dateFormat": "yyyy-MM-dd"
    }
  },
  "transformers": [ { "vertex": { "class": "GeoGlyphWKT" } } ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:nazca-wkt.orientdb",
      "dbType": "graph",
      "batchCommit": 1000
    }
  }
}
I run oetl using this command:
$ oetl.sh commonGeoGlyphWKT.json nazca_lines_wkt.json
but this fails with the following output:
$ oetl.sh commonGeoGlyphWKT.json nazca_lines_wkt.json
OrientDB etl v.2.2.13 (build 2.2.x@r90d7caa1e4af3fad86594e592c64dc1202558ab1; 2016-11-15 12:04:05+0000) www.orientdb.com
BEGIN ETL PROCESSOR
[file] INFO Reading from file ./nazca_lines_wkt.csv with encoding UTF-8
Started execution with 1 worker threads
Error in Pipeline execution: com.orientechnologies.orient.core.exception.OValidationException: impossible to convert value of field "Location"
DB name="nazca-wkt.orientdb"
ETL process has problem: java.util.concurrent.ExecutionException: com.orientechnologies.orient.core.exception.OValidationException: impossible to convert value of field "Location"
DB name="nazca-wkt.orientdb"
END ETL PROCESSOR
+ extracted 9 rows (0 rows/sec) - 9 rows -> loaded 0 vertices (0 vertices/sec) Total time: 16ms [0 warnings, 1 errors]
I'm sure it's something silly that I'm missing... has anyone been able to import CSV files containing WKT strings for points, polygons, etc. using ETL?
Any help is appreciated!
This is working for me. Instead of the vertex transformer, which tries to coerce the WKT string directly into the EMBEDDED OPoint property, it inserts each row with a command transformer and converts the string with St_GeomFromText():
commonGeoGlyphWKT.json
{
  "source": { "file": { "path": "./nazca_lines_wkt.csv" } },
  "extractor": {
    "csv": {
      "separator": ",",
      "columns": ["Name:String", "Location:String"]
    }
  },
  "transformers": [
    { "command": { "command": "INSERT INTO GeoGlyphWKT(Name,Location) values('${input.Name}', St_GeomFromText('${input.Location}'))" } }
  ],
  "loader": {
    "orientdb": {
      "dbURL": "plocal:/home/ivan/OrientDB/db_installati/enterprise/orientdb-enterprise-2.2.13/databases/stack40982509-spatial",
      "dbUser": "admin",
      "dbPassword": "admin",
      "dbType": "graph",
      "batchCommit": 1000
    }
  }
}
nazca_lines_wkt.csv
Name,Location
Hummingbird,POINT (-75.148892 -14.692131)
Monkey,POINT (-75.138532 -14.706940)
Condor,POINT(-75.126208 -14.697444)
Spider,POINT(-75.122381 -14.694145)
Spiral,POINT(-75.122746 -14.688277)
Hands,POINT(-75.113881 -14.694459)
Tree,POINT(-75.114520 -14.693898)
Astronaut,POINT(-75.079755 -14.745222)
Dog,POINT(-75.130788 -14.706401)
Wing,POINT(-75.100385 -14.680309)
Parrot,POINT(-75.107498 -14.689463)
[ivan@canemagico-pc bin]$ ./oetl.sh commonGeoGlyphWKT2.json
OrientDB etl v.2.2.13 (build 2.2.x@r90d7caa1e4af3fad86594e592c64dc1202558ab1; 2016-11-15 12:04:05+0000) www.orientdb.com
[csv] INFO column types: {Name=STRING, Location=STRING}
BEGIN ETL PROCESSOR
[file] INFO Reading from file ./nazca_lines_wkt.csv with encoding UTF-8
Started execution with 1 worker threads
[orientdb] INFO committing
END ETL PROCESSOR
+ extracted 11 rows (0 rows/sec) - 11 rows -> loaded 11 vertices (0 vertices/sec) Total time: 244ms [0 warnings, 0 errors]
orientdb {db=stack40982509-spatial}> select from GeoGlyphWKT
+----+-----+-----------+-----------+-----------------------+
|# |#RID |#CLASS |Name |Location |
+----+-----+-----------+-----------+-----------------------+
|0 |#25:0|GeoGlyphWKT|Hummingbird|OPoint{coordinates:[2]}|
|1 |#25:1|GeoGlyphWKT|Spiral |OPoint{coordinates:[2]}|
|2 |#25:2|GeoGlyphWKT|Dog |OPoint{coordinates:[2]}|
|3 |#26:0|GeoGlyphWKT|Monkey |OPoint{coordinates:[2]}|
|4 |#26:1|GeoGlyphWKT|Hands |OPoint{coordinates:[2]}|
|5 |#26:2|GeoGlyphWKT|Wing |OPoint{coordinates:[2]}|
|6 |#27:0|GeoGlyphWKT|Condor |OPoint{coordinates:[2]}|
|7 |#27:1|GeoGlyphWKT|Tree |OPoint{coordinates:[2]}|
|8 |#27:2|GeoGlyphWKT|Parrot |OPoint{coordinates:[2]}|
|9 |#28:0|GeoGlyphWKT|Spider |OPoint{coordinates:[2]}|
|10 |#28:1|GeoGlyphWKT|Astronaut |OPoint{coordinates:[2]}|
+----+-----+-----------+-----------+-----------------------+
11 item(s) found. Query executed in 0.013 sec(s).