Inserting rows through SQuirreL SQL Client for Apache Phoenix

Using:
SQuirreL SQL Client Version 3.7
Phoenix Thin Driver
The table us_population was already created by running the following query in SQuirreL SQL Client:
CREATE TABLE IF NOT EXISTS us_population (
    state CHAR(2) NOT NULL,
    city VARCHAR NOT NULL,
    population BIGINT
    CONSTRAINT my_pk PRIMARY KEY (state, city));
When I tried inserting one row:
UPSERT INTO US_POPULATION (STATE,CITY,POPULATION) VALUES ('NY','New York',8143197);
the following message was output:
Query 1 of 1, Rows read: 0, Elapsed time (seconds) - Total: 0.106, SQL query: 0.106, Reading results: 0
When I tried to retrieve all the records, zero rows were found.
SELECT * FROM US_POPULATION
Result
Rows: 0
How can I insert 1 row?
I am new to SQuirreL SQL and Apache Phoenix by the way.
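One detail worth checking (my own hedged guess, not something stated in the post): Phoenix only makes UPSERTs visible after the connection commits, so if the SQuirreL session is not committing (for example, auto-commit is off and no manual commit is issued), the row is lost when the session ends. Assuming auto-commit has been enabled in the SQuirreL session properties, the same statements should then behave as expected:
UPSERT INTO US_POPULATION (STATE, CITY, POPULATION) VALUES ('NY', 'New York', 8143197);
-- With auto-commit on, the UPSERT is committed immediately and the row should be visible:
SELECT * FROM US_POPULATION;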

Related

How to correctly GROUP BY on jdbc sources

I have a Kafka stream with user_id and want to produce another stream with user_id and the number of matching records in a JDBC table.
The following is how I tried to achieve this (I'm new to Flink, so please correct me if that's not how things are supposed to be done). The issue is that Flink ignores all updates to the JDBC table after the job has started.
As far as I understand, the answer to this is to use lookup joins, but Flink complains that lookup joins are not supported on temporal views. I also tried versioned views without much success.
What would be the correct approach to achieve what I want?
CREATE TABLE kafka_stream (
    user_id STRING,
    event_time TIMESTAMP(3) METADATA FROM 'timestamp',
    WATERMARK FOR event_time AS event_time - INTERVAL '5' SECOND
) WITH (
    'connector' = 'kafka',
    -- ...
)
-- NEXT SQL --
CREATE TABLE jdbc_table (
    user_id STRING,
    checked_at TIMESTAMP,
    PRIMARY KEY(user_id) NOT ENFORCED
) WITH (
    'connector' = 'jdbc',
    -- ...
)
-- NEXT SQL --
CREATE TEMPORARY VIEW checks_counts AS
SELECT user_id, count(*) as num_checks
FROM jdbc_table
GROUP BY user_id
-- NEXT SQL --
INSERT INTO output_kafka_stream
SELECT
    kafka_stream.user_id,
    checks_counts.num_checks
FROM kafka_stream
LEFT JOIN checks_counts ON kafka_stream.user_id = checks_counts.user_id
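Not from the original question, but as a minimal sketch of the lookup-join syntax Flink does support: the lookup has to target the table backed by the JDBC connector directly, not the aggregated temporary view, and it needs a processing-time attribute on the stream (assume proc_time AS PROCTIME() has been added to the kafka_stream DDL). The per-user aggregation itself is not shown here:
-- Processing-time lookup join against the JDBC table
-- (assumes kafka_stream also declares: proc_time AS PROCTIME())
SELECT
    k.user_id,
    j.checked_at
FROM kafka_stream AS k
LEFT JOIN jdbc_table FOR SYSTEM_TIME AS OF k.proc_time AS j
    ON k.user_id = j.user_id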

Multiple series with one query in Grafana using PostgreSQL as data source

I have data in a Postgres table with roughly this form:
CREATE TABLE jobs
(
    id BIGINT PRIMARY KEY,
    started_at TIMESTAMPTZ,
    duration NUMERIC,
    project_id BIGINT
)
I also came up with a query that is kinda what I want:
SELECT
$__timeGroupAlias(started_at,$__interval),
avg(duration) AS "durations"
FROM jobs
WHERE
project_id = 720
GROUP BY 1
ORDER BY 1
This query filters for one exact project_id. What I actually want is one line in the chart for each project that has an entry in the table, not just one.
I can't find a way to do that. I tried all the different flavors of GROUP BY clauses I could think of, and also tried the examples I found online, but none of them worked.
Try this Grafana PostgreSQL query; selecting project_id::text with the alias "metric" makes Grafana treat each project as a separate series:
SELECT
$__timeGroupAlias(started_at, $__interval),
project_id::text AS "metric",
AVG(duration) AS "durations"
FROM jobs
WHERE $__timeFilter(started_at)
GROUP BY 1,2
ORDER BY 1

PostgreSQL 12.3: ERROR: out of memory for query result

I have an AWS RDS PostgreSQL 12.3 instance (t3.small, 2 CPU, 2 GB RAM) with this table:
CREATE TABLE public.phones_infos
(
    phone_id integer NOT NULL DEFAULT nextval('phones_infos_phone_id_seq'::regclass),
    phone character varying(50) COLLATE pg_catalog."default" NOT NULL,
    company_id integer,
    phone_tested boolean DEFAULT false,
    imported_at timestamp with time zone NOT NULL,
    CONSTRAINT phones_infos_pkey PRIMARY KEY (phone_id),
    CONSTRAINT fk_phones_infos FOREIGN KEY (company_id)
        REFERENCES public.companies_infos (id) MATCH SIMPLE
        ON UPDATE NO ACTION
        ON DELETE CASCADE
)
There are exactly 137468 records in this table, as counted with:
SELECT count(1) FROM phones_infos;
The ERROR: out of memory for query result occurs with this simple query when I use pgAdmin 4.6:
SELECT * FROM phones_infos;
I have tables with 5M+ records and never had this problem before.
EXPLAIN SELECT * FROM phones_infos;
Seq Scan on phones_infos (cost=0.00..2546.68 rows=137468 width=33)
I read this article to see if I could find answers, but unfortunately, as the metrics show, there are no old pending connections that could be eating memory.
As suggested there, shared_buffers seems to be correctly sized:
SHOW shared_buffers;
449920kB
What should I try?
The problem must be on the client side. A sequential scan does not require much memory in PostgreSQL.
pgAdmin will cache the complete result set in RAM, which probably explains the out-of-memory condition.
I see two options:
Limit the number of result rows in pgAdmin:
SELECT * FROM phones_infos LIMIT 1000;
Use a different client, for example psql. There you can avoid the problem by setting
\set FETCH_COUNT 1000
so that the result set is fetched in batches.

Postgres SubQuery Limit For Update not respecting Limit

I recently upgraded my postgres db from 9.5.4 to 10.7 and noticed some odd behavior with an existing query.
The trimmed down version looks like this:
update mytable
set job_id = 6
where id in (
    select * from (
        select id
        from mytable
        where job_id is null
        limit 2
    ) x
    for update
)
and job_id is null
I would expect the number of rows updated to equal 2, but instead it updates all the records that match the subquery without the limit. If I remove the for update clause, or the matching job_id is null condition, the number of records updated does equal 2 as expected. Before the upgrade, this query updated the correct number of rows.
Did some behavior in 10.x change?
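No answer appears in the original thread, but for illustration, one commonly used rewrite (an assumption on my part, not a confirmed fix for this report) is to lock the candidate ids in a CTE, which PostgreSQL 10 always materializes, so the LIMIT is evaluated exactly once:
-- Hypothetical rewrite: materialize the limited, locked id set in a CTE
with candidates as (
    select id
    from mytable
    where job_id is null
    limit 2
    for update
)
update mytable
set job_id = 6
from candidates
where mytable.id = candidates.id;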

The query result changes in remote and local database

The table below is created in both the local and remote databases.
CREATE TABLE EMPLOYEE1 ( EMP_ID INTEGER, EMP_NAME VARCHAR(10), EMP_DEPT VARCHAR(10) );
The following rows are inserted into the tables created in both databases.
INSERT INTO EMPLOYEE1 (EMP_ID, EMP_NAME,EMP_DEPT)
VALUES (1,'A','IT'), (2,'B','IT'), (3,'C','SALES'), (4,'D','SALES'), (5,'E','ACCOUNTS'), (6,'F','ACCOUNTS'), (7,'G','HR'), (8,'H','HR');
COMMIT;
If I run the query below in the local database on my system, the result is correct, i.e. it returns all the rows in the table, exactly as the query should. But if I run the same query in the remote database, only 4 rows are returned, which is a wrong result.
SELECT * FROM EMPLOYEE1 WHERE (EMP_DEPT NOT IN ('IT','SALES') OR EMP_DEPT IN ('IT','SALES'));
Can anyone suggest why the query behavior changes?
As per your query, you want to select all the records, so you can simply use the following:
SELECT * FROM EMPLOYEE1
What is the purpose of this condition?
WHERE (EMP_DEPT NOT IN ('IT','SALES') OR EMP_DEPT IN ('IT','SALES'))
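As an aside (my addition, not part of the original answer): for any non-NULL EMP_DEPT that condition is always true, so it can only change the result when EMP_DEPT is NULL, for example:
-- A NULL department makes both IN and NOT IN evaluate to NULL,
-- so the OR is NULL and the row is filtered out:
INSERT INTO EMPLOYEE1 (EMP_ID, EMP_NAME, EMP_DEPT) VALUES (9, 'I', NULL);
SELECT * FROM EMPLOYEE1
WHERE (EMP_DEPT NOT IN ('IT','SALES') OR EMP_DEPT IN ('IT','SALES'));
-- The row with EMP_ID = 9 is not returned, whereas a plain SELECT * would return it.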