Missing data for column xxx - PostgreSQL

I've downloaded multiple metro extracts from OpenStreetMap as PBF files. When I try to import them with osm2pgsql, it works for the first one and creates the tables. I then added a column to planet_osm_ways with a city ID so I can tell which way ID belonged to which city, but when I try to import another city, it fails with 'ERROR: Missing data for column "city_id"'. Is there a way to modify the planet_osm_ways table without breaking the script? I really need to know which ID belonged to which metro extract.

You need to edit the style file (default.style, possibly in the osm2pgsql-bin directory) used by osm2pgsql.
You can then add a line such as:
#Add custom column
node,way city_id int4 linear
The column will be created and, provided no tag has this name, it will not be populated. You are then free to populate it however you want, for example with an UPDATE after each import, as sketched below.
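For illustration, here is a minimal Python sketch of that population step. It assumes the extra column is named city_id, the osm2pgsql target database is called gis, and each extract after the first is loaded with --append so earlier data is kept; the table and ID values are just placeholders taken from the question.

# Hypothetical sketch: stamp the rows loaded by the most recent import.
# Assumed names: database "gis", table planet_osm_ways, column city_id.
import psycopg2

def tag_latest_import(city_id, dsn="dbname=gis"):
    with psycopg2.connect(dsn) as conn:
        with conn.cursor() as cur:
            # Ways from the import that just finished still have a NULL city_id,
            # so only those rows get stamped here.
            cur.execute(
                "UPDATE planet_osm_ways SET city_id = %s WHERE city_id IS NULL",
                (city_id,),
            )
    # The connection context manager commits the transaction on success.

if __name__ == "__main__":
    tag_latest_import(1)  # e.g. 1 = the first metro extract

Run something like this once after each city's import, so every way keeps the ID of the extract it came from.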


ADF Pipeline: include fixed text in output

The overall aim of the pipeline is to copy from XML to Oracle.
One of the source columns is a datetime that needs formatting, so I'm using an intermediate copy activity to copy from XML to CSV, as instructed in this answer.
From the CSV to the table it's a simple mapping, except for the need for an additional target column with a fixed value of '365Response'.
I've tried adding this as an additional column as shown below:
However, on the mapping tab, I'm not able to select the new additional column:
What did I do wrong?
Your process for adding an additional column in the copy activity looks correct. If the additional column is not showing in the mapping, clear the mapping and import the schema again to refresh it.

How to copy data from a CSV to an Azure SQL Server table?

I have a dataset based on a CSV file. It exposes data as follows:
Name,Age
John,23
I have an Azure SQL Server instance with a table named: [People]
This has columns
Name, Age
I am using the Copy Data activity and trying to copy data from the CSV dataset into the Azure table.
There is no option to indicate the table name as a source; instead, I have a space to input a stored procedure name?
How does this work? Where do I put the target table name in the image below?
You should definitely have a table name to write to; if you don't have a table, something is wrong with your setup. Make sure you have a table to write to and that the field names in your table match the fields in the CSV file. Then follow the steps outlined in the walkthrough below. There are several steps to click through, but they are pretty intuitive, so follow them one by one and you should be fine.
http://normalian.hatenablog.com/entry/2017/09/04/233320
You can add records to the SQL Database table directly, without stored procedures, by configuring the table name on the sink dataset rather than on the copy activity, which is what is happening here.
Have a look at the screenshot below, which shows the Table field within my dataset.

How can I import a son model with a relation to a father model?

I have an application with 2 Google Drive Tables (FatherM & SonM models)
with a many-to-one relation.
I'm able to export the data to a spreadsheet.
In the SonM model, the export automatically creates an extra column named after the relation (FatherM), containing the keys of the father records.
When I import the single-model SonM spreadsheet, I get an error:
V:1 Field names "FatherM" in the spreadsheet can't be found in the corresponding model.
Yes, it doesn't exist in the model, but it is created by the relation.
How can I import the SonM data?
The difference between the "import single model" and "import all models and relations" modes is that "import single model" doesn't import relations; instead, it expects all columns in your spreadsheet to be fields of the model. You can try the "import all models and relations" mode with a spreadsheet where all tabs other than SonM contain no data.
Thanks very much for your answer.
I'm now using the "import all models and relations" mode to import relations too.
Here are some comments; if you have any information to give me, don't hesitate to tell me.
My goal is to export all the models from a test environment to a spreadsheet, add or modify rows manually, and then later import all the data into a prod environment.
1) I've exported all the data: OK.
2) I've re-imported all the data:
First remark: if one of the spreadsheet tabs is empty (only one line with the column titles),
I get an error at import: Cannot read property "length" from null. 0 records imported.
I've deleted all the tabs that don't have data, and I no longer get this error.
Second remark: I get another import message from the Google import:
Value error at cell "V:6": Can't import association: "Invit - Event", because record key is not defined. 0 records imported
Event is my father model and Invit is my son model (for one Event I have many Invits).
About V:6: in the Invit tab, V is the relation column to Event (it contains the keys that link to the main _Key of the Event model; the name of this column is the name of the relation I created).
The first 5 lines of this tab are invitations I made using my App Maker application (each of these lines has a _Key value in column A); line 6 and after are invitations I added manually (coming from another tool; these are old invitations I need to import).
On line 6 and after, column A (_Key) is empty.
The cell value in V:6 is an Event _Key and it exists, so I don't understand the import error message from Google (do you understand this message?).
Third remark:
I've just done this test:
Create a new son relation using the App Maker tool.
Export all the data.
Re-import exactly the same data.
And I got this error:
Drive Table internal error. Record not found. 0 associations imported
Do you know where I can find information about importing relations? On this page https://developers.google.com/appmaker/models/import-export there is nothing about relations.
Thanks
Stéphane

FileMaker Pro 14 history tables

In a few solutions I've worked with, I've created temp tables or history tables. Normally I script it to take a handful of fields from a main table and copy them over to the other table by
setting a variable, then setting the field to the variable for each field in the new table / new record.
I now have a situation where I'm building a history table that needs to copy the current record as is: a snapshot where all fields from that instance of the record are copied to the history table.
Rather than setting a variable and then setting a field to the variable, I'd like some input on a quicker way to do this at the record level, without typing it out field by field. Also, if fields are added to both tables, I have to make sure my script gets updated.
I'll keep hunting around; I appreciate any help.
-Rich
Do you have a sample of copying a record from one table to another,
including all fields and setting some fields?
As I suggested in comments, use the Import Records[] script step, and select the same file as the source. If you choose Arrange by: [ matching names ] in the Import Field Mapping dialog, it will automatically map all source fields to their similarly named counterparts.
Note that you must establish a found set in the source table before importing.
For "setting some fields", you can define auto-enter options and activate them during the import, or run Replace Field Contents[] immediately after the import.

Is it possible to create table templates in FileMaker?

I'm using FileMaker Pro 12 and I was wondering if there is a way of creating a template for tables. There are a number of identical utility fields I place in my tables, such as modification timestamps and active/inactive flags. I was hoping there was a way to define the skeleton of each table somehow, instead of having to manually add these identical fields every time.
If you are using the Advanced version, you can copy & paste fields among tables/files.
Using the regular version, you can import records from your "default" table and specify [New Table...] as the target table. This will recreate the source table's structure in the target file. The source table does not have to contain any records for this to work.
To expand a little bit on michael-hor257k's answer, if you're using FileMaker Pro Advanced, a good practice is to create a "Default" table that has your core utility fields. When you want to make a new table in Manage Database, instead:
Highlight the Default table,
Copy & Paste the table, then
Rename the new table.