I created a Thing that accesses my PostgreSQL database table named sensordata. Now I have to send data to this table. How can I do this?
I have already set up the connection between ThingWorx Composer and the PostgreSQL database on my local machine.
I am trying to send sensor data from ThingWorx to the PostgreSQL database, but I am not able to send it.
You need to do two things:
1. Create a service on the postgresql_conn Thing to insert a row.
Select 'SQL (Command)' as the script type.
Put something like this into the script area:
INSERT INTO sensordata
(Temperature, Humidity, Vibration)
VALUES ([[TemperatureField]], [[HumidityField]], [[VibrationField]]);
TemperatureField, HumidityField, and VibrationField are the input fields of the service.
2. Create a Subscription on the sensordata Thing.
Set AnyDataChange as the event.
Put something like this into the script area:
var params = {
    TemperatureField: me.Temperature,
    HumidityField: me.Humidity,
    VibrationField: me.Vibration
};
var result = Things["postgresql_conn"].InsertRecord(params);
Now, whenever the data of sensordata changes, a row is added to the PostgreSQL table.
Sorry for my english
We have "things" sending data to AWS IoT. A rule forwards the payloads to a Lambda which is responsible for inserting or updating the data into Postgres (AWS RDS). The Lambda is written in python and uses PG8000 for interacting with the db. The lambda event looks like this:
{
    "event_uuid": "8cd0b9b1-be93-49f8-1234-af4381052672",
    "date": "2021-07-08T16:09:25.138809Z",
    "serial_number": "a1b2c3",
    "temp": "34"
}
Before inserting the data into Postgres, a query is run on the table to look for any existing event_uuids which are required to be unique. For a specific reason, there is no UNIQUE constraint on the event_uuid column. If the event_uuid does not exist, the data is inserted. If the event_uuid does exist, the data is updated. This all works great, except for the following case.
THE ISSUE: One of our things is sending two identical payloads in very quick succession. It's an issue with that thing, but it's not something we can resolve at the moment, and we need to account for it. Here are the timestamps from CloudWatch of when each payload was received:
2021-07-08T12:10:09.288-04:00
2021-07-08T12:10:09.772-04:00
As a result of the payloads being received 484ms apart, the Lambda is inserting both payloads instead of inserting the first and performing an update with the second one.
Any ideas on how to get around this?
Here is part of the Lambda code...
conn = make_conn()
event_query = f"""
    SELECT json_build_object('uuid', uuid)
    FROM samples
    WHERE event_uuid='{event_uuid}'
      AND serial_number='{serial_number}'
"""
event_resp = fetch_one(conn, event_query)
if event_resp:
    update_sample_query = f"""
        UPDATE samples SET temp={temp} WHERE uuid='{event_resp["uuid"]}'
    """
else:
    insert_sample_query = f"""
        INSERT INTO samples (uuid, event_uuid, temp)
        VALUES ('{uuid4()}', '{event_uuid}', {temp})
    """
I am trying to define a model that is based on the PersistedModel to access a table in DB2, call it MY_SCHEMA.MY_TABLE.
I created the model MY_TABLE, based on PersistedModel, with a Data Source (datasources.json) where the definition includes the attribute "schema": "MY_SCHEMA". The data source also contains the userid my_userid, used for the connection.
Current Behavior
When I try to call the API for this model, it tries to access the table my_userid.MY_TABLE.
Expected Behavior
It should access MY_SCHEMA.MY_TABLE.
The DB2 instance happens to be on a System Z. I have created a table called my_userid.MY_TABLE, and that will work; however, for the solution we are trying to build, multiple schemas are required.
Note that this only appears to be an issue with Db2 on System Z. I can change schemas on Db2 LUW.
What LoopBack connector are you using? What version? Can you also check what version of loopback-ibmdb is installed in your node_modules folder?
AFAICT, LoopBack's DB2-related connectors support the schema field; see https://github.com/strongloop/loopback-ibmdb/blob/master/lib/ibmdb.js#L96-L100
self.schema = this.username;
if (settings.schema) {
    self.schema = settings.schema.toUpperCase();
}
self.connStr += ';CurrentSchema=' + self.schema;
Have you considered configuring the database connection using DSN instead of individual fields like hostname and username?
In your datasource config JSON:
"dsn": "DATABASE={dbname};HOSTNAME={hostname};UID={username};PWD={password};CurrentSchema=MY_SCHEMA"
I am using a Java-based program, and I am writing a simple SELECT query inside it to retrieve data from a PostgreSQL database. The data comes with a header, which breaks the rest of my code.
How do I get rid of all column headings in an SQL query? I just want to print out the raw data without any headings.
I am using the Building Controls Virtual Test Bed (BCVTB) to connect my database to EnergyPlus. BCVTB has a database actor in which you can write a query, receive data, and send it to your other simulation program. I decided to use PostgreSQL. However, when I write SELECT * FROM mydb, it returns the data with the column names (header). I just want the raw data without the header. What should I do?
PostgreSQL does not send table headings the way a CSV file does. The protocol (as used via JDBC) sends only the rows. The driver does request a description of the rows that includes the column names, but that description is not part of the result-set rows the way the "header first" convention is for CSV.
Whatever is happening must be a consequence of the BCVTB tools you are using, and I suggest pursuing it on that side of things.
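As a quick illustration of that separation (a sketch, not part of the original answer, and in Python rather than the asker's Java): the fetched rows contain only data, and the column names live in separate result-set metadata that you must ask for explicitly. The connection details and table name here are placeholders.

import pg8000.dbapi

conn = pg8000.dbapi.connect(user="me", password="secret", database="mydb")
cur = conn.cursor()
cur.execute("SELECT * FROM sensor_readings")          # hypothetical table
rows = cur.fetchall()                                  # data rows only, no header row
column_names = [col[0] for col in cur.description]     # names come from metadata, on request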
In my application, as I release a new version, I alter my existing SQLite database by adding new tables or altering existing tables.
I have written these statements in a script/text file and want to import them as a batch directly into the existing database, where the queries will execute once.
I know that I can do the same by writing each ALTER query separately, but this increases both execution time and the time spent writing them.
Any ideas on how I can achieve this?
One thing I used to do was keep an array of columns for each table, like
persons = {[ fname, lname, address, zip ]}
Then I also keep a version array that tells me that, for version 1, persons has 4 columns.
Then, when I update the application and add, for example, gsm to persons, I update the array and the count. Then I run a query against sqlite_master and parse the data;
you can run '.schema persons' to get the CREATE statement. This is work you do only once, and this way you never run ALTER TABLE on tables that are already up to date. You need to be organized.
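Not part of the answer above, just a sketch of the same idea using Python's sqlite3 module: compare the columns the current version expects against PRAGMA table_info and run ALTER TABLE only for the ones that are missing. The persons columns are the example ones from the answer; everything else is a placeholder.

import sqlite3

# Columns each table should have in the current application version
# (the "version array" idea from the answer).
EXPECTED = {
    "persons": ["fname", "lname", "address", "zip", "gsm"],
}

def upgrade(db_path):
    conn = sqlite3.connect(db_path)
    for table, wanted in EXPECTED.items():
        existing = {row[1] for row in conn.execute(f"PRAGMA table_info({table})")}
        for col in wanted:
            if col not in existing:
                # Only missing columns are added, so re-running this is harmless.
                conn.execute(f"ALTER TABLE {table} ADD COLUMN {col} TEXT")
    conn.commit()
    conn.close()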
One of my users wants to get data into Excel from SQL 2008 query/stored proc.
I never actually did it before.
I tried a sample using ADO and got data, but the user reasonably asked: where are the column names?
How do I connect a spreadsheet to an SQL result set and get it with the column names?
Apparently the field names are already in the recordset object; I just needed to pull them out.
i = 1
For Each objField In rs.Fields
    Sheet1.Cells(1, i) = objField.Name
    i = i + 1
Next objField
I don't know which version of Excel you are using but in Excel 2007 you can just connect to the SQL DB by going to Data -> From Other Sources -> From SQL Server. After you select your server and database, your connection will be created. Then you can edit it (Data -> Connections -> Properties) where in the Definition tab you change the Command type to SQL and enter your query in the Command text box. You can also create a view on the server and just point to that from Excel.
This should do it unless I misunderstood your question.