I have installed YugabyteDB and created a local cluster using this command:
./bin/yugabyted start
The database is up and running. I then created the keyspaces and tables by running the following command:
cqlsh -f resources/IoTData.cql
IoTData.cql contains the following:
// Create keyspace
CREATE KEYSPACE IF NOT EXISTS TrafficKeySpace;
// Create tables
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Origin_Table (vehicleId text, routeId text, vehicleType text, longitude text, latitude text, timeStamp timestamp, speed double, fuelLevel double, PRIMARY KEY ((vehicleId), timeStamp)) WITH default_time_to_live = 3600;
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Total_Traffic (routeId text, vehicleType text, totalCount bigint, timeStamp timestamp, recordDate text, PRIMARY KEY (routeId, recordDate, vehicleType));
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Window_Traffic (routeId text, vehicleType text, totalCount bigint, timeStamp timestamp, recordDate text, PRIMARY KEY (routeId, recordDate, vehicleType));
CREATE TABLE IF NOT EXISTS TrafficKeySpace.Poi_Traffic(vehicleid text, vehicletype text, distance bigint, timeStamp timestamp, PRIMARY KEY (vehicleid));
// Select from the tables
SELECT count(*) FROM TrafficKeySpace.Origin_Table;
SELECT count(*) FROM TrafficKeySpace.Total_Traffic;
SELECT count(*) FROM TrafficKeySpace.Window_Traffic;
SELECT count(*) FROM TrafficKeySpace.Poi_Traffic;
// Truncate the tables
TRUNCATE TABLE TrafficKeySpace.Origin_Table;
TRUNCATE TABLE TrafficKeySpace.Total_Traffic;
TRUNCATE TABLE TrafficKeySpace.Window_Traffic;
TRUNCATE TABLE TrafficKeySpace.Poi_Traffic;
The YB-Master Admin UI shows me that the tables were created, but when I use the pgAdmin client to browse data in that database, it doesn't show me those tables.
In order to connect to YugabyteDB I used these properties:
database : yugabyte
user : yugabyte
password : yugabyte
host : localhost
port : 5433
Why doesn't the client show the tables I have created?
The reason is that the two layers can't interact with each other: YSQL data/tables cannot be read from YCQL clients, and vice versa.
This is also explained in the FAQ:
The YugabyteDB APIs are currently isolated and independent from one
another. Data inserted or managed by one API cannot be queried by the
other API. Additionally, Yugabyte does not provide a way to access the
data across the APIs.
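In practice this means the tables created through cqlsh have to be inspected through a YCQL client, not pgAdmin. A minimal sketch, assuming the default yugabyted ports (YCQL on 9042, YSQL on 5433) and a running local cluster:

```shell
# The YCQL tables are visible only through the YCQL API (port 9042):
cqlsh -e "DESCRIBE KEYSPACE TrafficKeySpace"

# pgAdmin talks to the YSQL (PostgreSQL-compatible) API on port 5433,
# so it only lists tables created through ysqlsh/psql:
./bin/ysqlsh -c "\dt"
```

Both shells ship with YugabyteDB; the YCQL keyspace will show up in the first command but never in the second.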
Related
I am connected over TDS (port 1433) to a Postgres/Aurora (Babelfish-enabled) database.
I can run the following three queries from my application, and I receive confusing responses:
SELECT current_database()
SELECT * FROM information_schema.tables WHERE table_name = 'PERSON'
SELECT COUNT(1) FROM "PERSON"
The responses are:
{"current_database":"babelfish_db"}
{"table_catalog":"babelfish_db","table_schema":"public","table_name":"PERSON","table_type":"BASE TABLE"...}
relation "person" does not exist
I simply cannot query the PERSON table. I have tried:
"PERSON"
"person"
PERSON
person
public.PERSON
public.person
public."PERSON"
I have ensured the user I am connecting as has access to the database, schema and tables:
GRANT CONNECT ON DATABASE babelfish_db TO popweb;
GRANT USAGE ON SCHEMA public TO popweb;
GRANT SELECT, INSERT, UPDATE, DELETE ON ALL TABLES IN SCHEMA public TO popweb;
Still, I cannot access the table. I feel like such a boob/noob
For anyone who has connected to Postgres via Babelfish, what am I doing wrong?
The GA release of Babelfish didn't make any modifications to the PostgreSQL implementation of the information schema. So, what you see is the physical database of babelfish_db and the public schema. It looks like you created the table using the PostgreSQL endpoint.
To work with tables in Babelfish at this time, you need to create a T-SQL virtual database and your tables inside of that database using the T-SQL endpoint - just like you did before.
For example, using SSMS, open a new query connected to your Babelfish endpoint. You should notice in the SSMS database drop-down and in the status bar that your context shows you are in the master database.
CREATE DATABASE ford_prefect;
GO
USE ford_prefect;
GO
CREATE SCHEMA school;
GO
CREATE TABLE [School].[Person](
[PersonID] [int] NOT NULL,
[LastName] [nvarchar](50) NOT NULL,
[FirstName] [nvarchar](50) NOT NULL,
[HireDate] [datetime] NULL,
[EnrollmentDate] [datetime] NULL,
[Discriminator] [nvarchar](50) NOT NULL,
CONSTRAINT [PK_School.Student] PRIMARY KEY CLUSTERED
(
[PersonID] ASC
)
)
GO
At this point you can add records via INSERT and select from the table without issues.
Cheers,
Bill Ramos
Aurora PostgreSQL Babelfish PM, Amazon
I have created a table in postgres with some timestamp columns.
create table glacier_restore_progress_4(
id SERIAL NOT NULL ,
email VARCHAR(50),
restore_start timestamp,
restore_end timestamp,
primary key (id)
);
In the DBeaver client, those timestamp columns show values like "2021-06-22 03:25:00". But when I fetch them via an API, the values become "2021-06-22T03:25:00.000Z". How do I get rid of that?
I tried changing the data type of the columns in the DBeaver client; that didn't work.
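The API is not altering the stored value; its JSON layer simply serializes timestamps in ISO-8601 ('T' separator, 'Z' suffix), while DBeaver renders the same value with its own display format. One option is to format the value in the query itself; a sketch against the table above, using PostgreSQL's to_char (note that the result becomes text, so sorting and date arithmetic on it no longer work):

```sql
SELECT id,
       email,
       -- Render the timestamps the way DBeaver displays them:
       to_char(restore_start, 'YYYY-MM-DD HH24:MI:SS') AS restore_start,
       to_char(restore_end,   'YYYY-MM-DD HH24:MI:SS') AS restore_end
FROM glacier_restore_progress_4;
```

Alternatively, keep the column as a timestamp and reformat the string in the API layer, which preserves the column's type for everything else.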
I am having trouble thinking of a way to copy three fields out of a database and append them to another table along with the current date. Basically what I want to do is:
DB-A: ID (N9), Name (C69), Phone (N15) {and a list of other fields I don't care about}
DB-B: Date (today's date/time), Name, Address, Phone (as above)
It would be great if this could be a trigger in the DB on add or update of DB-A.
Greg
Quick and dirty using postgres_fdw
CREATE EXTENSION IF NOT EXISTS postgres_fdw ;
CREATE SERVER extern_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'foreignserver.co.uk', port '5432', dbname 'mydb');
CREATE USER MAPPING FOR myuser SERVER extern_server OPTIONS (user 'anotheruser');
-- Creating a foreign table based on table t1 at the server described above
CREATE FOREIGN TABLE foreign_t1 (
dba INT,
name VARCHAR(69),
phone VARCHAR(15)
)
SERVER extern_server OPTIONS (schema_name 'public', table_name 't1');
--Inserting data to a new table + date
INSERT INTO t2 SELECT dba,name,phone,CURRENT_DATE FROM foreign_t1;
-- Or just retrieving what you need placing the current date as a column
SELECT dba,name,phone,CURRENT_DATE FROM foreign_t1;
I've got a PgSQL 9.4.3 server set up. Previously I was only using the public schema, and I created a table like this, for example:
CREATE TABLE ma_accessed_by_members_tracking (
reference bigserial NOT NULL,
ma_reference bigint NOT NULL,
membership_reference bigint NOT NULL,
date_accessed timestamp without time zone,
points_awarded bigint NOT NULL
);
Using the Windows Program PgAdmin III I can see it created the proper information and sequence.
However I've recently added another schema called "test" to the same database and created the exact same table, just like before.
However this time I see:
CREATE TABLE test.ma_accessed_by_members_tracking
(
reference bigint NOT NULL DEFAULT nextval('ma_accessed_by_members_tracking_reference_seq'::regclass),
ma_reference bigint NOT NULL,
membership_reference bigint NOT NULL,
date_accessed timestamp without time zone,
points_awarded bigint NOT NULL
);
My question / curiosity is: why does the public schema show reference as bigserial, while the test schema shows bigint with a nextval() default?
Both work as expected. I just do not understand why the difference in schemas would produce different table definitions. I realize that bigint and bigserial allow the same range of values.
Merely A Notational Convenience
According to the documentation on Serial Types, smallserial, serial, and bigserial are not true data types. Rather, they are a notation for creating, at once, both a sequence and a column whose default value points to that sequence.
I created a test table on the public schema. The psql command \d shows a bigint column type. Maybe it's PgAdmin behavior?
Update
I checked the PgAdmin source code. In the function pgColumn::GetDefinition() it scans the pg_depend table for an auto dependency, and when one is found it replaces bigint with bigserial to simulate the original CREATE TABLE code.
When you create a serial column in the standard way:
CREATE TABLE new_table (
new_id serial);
Postgres creates a sequence with commands:
CREATE SEQUENCE new_table_new_id_seq ...
ALTER SEQUENCE new_table_new_id_seq OWNED BY new_table.new_id;
From documentation: The OWNED BY option causes the sequence to be associated with a specific table column, such that if that column (or its whole table) is dropped, the sequence will be automatically dropped as well.
The standard name of a sequence is built from the table name, the column name, and the suffix _seq.
If a serial column was created in this way, PgAdmin shows its type as serial.
If a sequence has a non-standard name or is not associated with a column, PgAdmin shows nextval() as the default value.
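You can check how a column's default is wired up yourself; both queries below are standard PostgreSQL catalog queries:

```sql
-- Returns the sequence owned by the column, or NULL if there is none:
SELECT pg_get_serial_sequence('test.ma_accessed_by_members_tracking', 'reference');

-- The ownership link itself is recorded as an 'a' (auto) dependency
-- of the sequence in pg_depend -- the same row PgAdmin looks for:
SELECT d.deptype
FROM pg_depend d
JOIN pg_class c ON c.oid = d.objid
WHERE c.relname = 'ma_accessed_by_members_tracking_reference_seq';
```

If the first query returns the sequence name and the second returns 'a', PgAdmin will render the column as bigserial; otherwise it shows the nextval() default.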