Updating Sonar to 4.1.2 fails to upgrade the DB - Oracle 10g

I'm upgrading from Sonar 3.4.1 to 4.1.2 on Oracle 10.2.
I've updated all plugins to the versions supported by the target release.
After clicking the Upgrade button on the /setup page I got this error:
2014.07.08 19:15:06 INFO [DbMigration]
2014.07.08 19:15:06 INFO [DbMigration] == AddNetworkUseSubCharacteristic: migrating =================================
2014.07.08 19:15:06 ERROR [o.s.s.ui.JRubyFacade] Fail to upgrade database
An error has occurred, all later migrations canceled:
ActiveRecord::JDBCError: ORA-00904: "DESCRIPTION": invalid identifier
: INSERT INTO characteristics (kee, name, rule_id, characteristic_order, enabled, parent_id, root_id, function_key, factor_value, factor_unit, offset_value, offset_unit, created_at, updated_at, quality_model_id, depth, description, id) VALUES('NETWORK_USE_EFFICIENCY', 'Network use', NULL, NULL, 1, 10347, 10347, NULL, NULL, NULL, NULL, NULL, TO_TIMESTAMP('2014-07-08 19:15:06:825000','YYYY-MM-DD HH24:MI:SS:FF6'), TO_TIMESTAMP('2014-07-08 19:15:06:825000','YYYY-MM-DD HH24:MI:SS:FF6'), NULL, NULL, NULL, ?)
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:227:in `log'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-jdbc-adapter-1.1.3/lib/arjdbc/oracle/adapter.rb:183:in `ora_insert'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/query_cache.rb:26:in `insert_with_query_dirty'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2967:in `create'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/timestamp.rb:53:in `create_with_timestamps'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/callbacks.rb:266:in `create_with_callbacks'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2933:in `create_or_update'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/callbacks.rb:250:in `create_or_update_with_callbacks'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:2583:in `save'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/validations.rb:1089:in `save_with_validation'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/dirty.rb:79:in `save_with_dirty'
org/jruby/RubyKernel.java:2225:in `send'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:229:in `with_transaction_returning_status'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/connection_adapters/abstract/database_statements.rb:136:in `transaction'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:182:in `transaction'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:228:in `with_transaction_returning_status'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:196:in `save_with_transactions'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:208:in `rollback_active_record_state!'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/transactions.rb:196:in `save_with_transactions'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/base.rb:727:in `create'
/opt/sonarqube-4.1.2/web/WEB-INF/db/migrate/466_add_network_use_sub_characteristic.rb:42:in `up'
org/jruby/RubyKernel.java:2221:in `send'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:282:in `migrate'
jar:file:/opt/sonarqube-4.1.2/web/WEB-INF/lib/jruby-complete-1.7.6.jar!/META-INF/jruby.home/lib/ruby/1.8/benchmark.rb:293:in `measure'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:282:in `migrate'
org/jruby/RubyKernel.java:2225:in `send'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:365:in `migrate'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:491:in `migrate'
org/jruby/RubyProc.java:290:in `call'
org/jruby/RubyProc.java:224:in `call'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:567:in `ddl_transaction'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:490:in `migrate'
org/jruby/RubyArray.java:1613:in `each'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:477:in `migrate'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:401:in `up'
/opt/sonarqube-4.1.2/web/WEB-INF/gems/gems/activerecord-2.3.15/lib/active_record/migration.rb:383:in `migrate'
/opt/sonarqube-4.1.2/web/WEB-INF/lib/database_version.rb:62:in `upgrade_and_start'
/opt/sonarqube-4.1.2/web/WEB-INF/app/models/database_migration_manager.rb:109:in `start_migration'
org/jruby/RubyProc.java:290:in `call'
org/jruby/RubyProc.java:228:in `call'
I'm not sure why this invalid-identifier error came up.
Thanks in advance for your help.
CHARACTERISTICS table structure:
Name                  Null?     Type
--------------------  --------  --------------
ID                    NOT NULL  NUMBER(38)
KEE                             VARCHAR2(100)
NAME                            VARCHAR2(100)
RULE_ID                         NUMBER(38)
CHARACTERISTIC_ORDER            NUMBER(38)
ENABLED                         NUMBER(1)
PARENT_ID                       NUMBER(38)
ROOT_ID                         NUMBER(38)
FUNCTION_KEY                    VARCHAR2(100)
FACTOR_VALUE                    NUMBER(30,20)
FACTOR_UNIT                     VARCHAR2(100)
OFFSET_VALUE                    NUMBER(30,20)
OFFSET_UNIT                     VARCHAR2(100)
CREATED_AT                      TIMESTAMP(6)
UPDATED_AT                      TIMESTAMP(6)
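Note that the failing INSERT lists a description column, while the dump above shows no DESCRIPTION column at all, which is exactly what ORA-00904 "invalid identifier" points at. As a diagnostic (my own suggestion, not an official SonarQube procedure), you could list what the table actually contains:

SELECT column_name FROM user_tab_columns WHERE table_name = 'CHARACTERISTICS';

If DESCRIPTION is genuinely missing, for instance because an earlier migration was only partially applied, one hedged workaround would be to add it by hand before retrying the upgrade; the VARCHAR2(4000) type here is an assumption, not taken from the SonarQube schema:

-- assumption: column type is guessed; verify against a working 4.1.2 schema first
ALTER TABLE characteristics ADD (description VARCHAR2(4000));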

Getting a reindexing issue after a 2.4.1 to 2.4.5 Commerce upgrade

$ bin/magento indexer:reindex catalog_product_flat
Product Flat Data index process error during indexation process:
SQLSTATE[42S22]: Column not found: 1054 Unknown column 'row_id' in 'field list', query was: INSERT INTO catalog_product_entity_tmp_indexer (row_id, entity_id, type_id, attribute_set_id, created_at, has_options, required_options, sku, updated_at) SELECT e.row_id, e.entity_id, e.type_id, e.attribute_set_id, e.created_at, e.has_options, e.required_options, e.sku, e.updated_at FROM catalog_product_entity AS e WHERE (e.created_in <= '1655131200') AND (e.updated_in > '1655131200')
I don't have that table in the database. I've tried multiple solutions, but none of them worked.

Query called from TypeScript/JavaScript returns a syntax error at "timestamp $3"; the same query in psql runs without a problem

When I do the following query, everything completes fine in psql:
-- psql
INSERT INTO public.contest (contest_id, period_id, start_ts, end_ts, contest_name, default_format, status )
VALUES ('VKVPA', '2019/01', timestamp '2019-01-20 08:00',
timestamp '2019-01-20 11:00', 'description', 'EDI', 'NEW' ) RETURNING contest_key;
-- console output:
contest_key |
------------+
17 |
(start_ts and end_ts have type TIMESTAMP WITHOUT TIME ZONE)
When I do the same from a program, it fails with a syntax error:
// contest-debug.ts
import { Pool } from 'pg' ;
let pool = new Pool( {user: 'contest_owner', database: 'contest'} );
pool.query(
"INSERT INTO public.contest (contest_id, period_id, start_ts, end_ts, contest_name, default_format, status ) "
+ "VALUES ($1, $2, timestamp $3, timestamp $4, $5, $6, 'NEW' ) RETURNING contest_key",
['VKVPA', '2019/02', '2019-01-20 08:00', '2019-01-20 11:00', 'VKV Provozni aktiv 2019/01', 'EDI']
)
.then( result => {
console.log(`New contest has number ${result.rows[0].contest_key}`);
})
.catch( reason => { console.log( 'Contest creation failed:', reason )});
Console output:
Contest creation failed: { error: syntax error at or near "$3"
at Connection.parseE (D:\dev\cav\log2any\node_modules\pg\lib\connection.js:601:11)
at Connection.parseMessage (D:\dev\cav\log2any\node_modules\pg\lib\connection.js:398:19)
at Socket.<anonymous> (D:\dev\cav\log2any\node_modules\pg\lib\connection.js:120:22)
at Socket.emit (events.js:193:13)
at addChunk (_stream_readable.js:296:12)
at readableAddChunk (_stream_readable.js:277:11)
at Socket.Readable.push (_stream_readable.js:232:10)
at TCP.onStreamRead (internal/stream_base_commons.js:150:17)
name: 'error',
length: 92,
severity: 'ERROR',
code: '42601',
detail: undefined,
hint: undefined,
position: '135',
internalPosition: undefined,
internalQuery: undefined,
where: undefined,
schema: undefined,
table: undefined,
column: undefined,
dataType: undefined,
constraint: undefined,
file: 'scan.l',
line: '1134',
routine: 'scanner_yyerror' }
When I step through the functions in the pg module, I can see the same correct values all the way through, so why does the same statement cause an SQL syntax error from JavaScript if the SQL itself is fine?
Unfortunately I do not know where to look for the final SQL text that the pg module creates.
What puzzles me even more is that the same program worked yesterday and does not work today, and I made no changes to the program itself, the tsc transpiler, or the pg module.
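One way to see the statement the server actually received (my own suggestion, not something from the original post) is to enable statement logging on the PostgreSQL side; node-postgres sends the query text and the parameter values separately, and both end up in the server log:

-- sketch, assuming superuser access; logs every statement the server receives
ALTER SYSTEM SET log_statement = 'all';
SELECT pg_reload_conf();
-- bind parameter values of prepared statements show up as DETAIL log lines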
UPDATE
The following code works. I did not pass the timestamp strings as parameters
and instead included them directly in the text of the query. Apparently, this is a bug either in libpq or in the javascript module.
pool.query(
"INSERT INTO public.contest (contest_id, period_id, start_ts, end_ts, contest_name, default_format, status ) "
+ "VALUES ($1, $2, timestamp '2019-02-17 08:00', timestamp '2019-02-17 11:00', $3, $4, 'NEW' ) RETURNING contest_key",
['VKVPA', '2019/02', 'VKV Provozni aktiv 2019/02', 'EDI']
)
...
Apparently, this is a gap in the PostgreSQL documentation. The timestamp 'string' syntax only works with a literal string constant; it cannot be applied to a parameter placeholder such as $3, but this is not explicitly mentioned in the manual (instead, it mentions that other type-cast expressions can also be used for runtime conversions). I would recommend that the authors of the PostgreSQL manual be a bit more explicit about this detail.
Solution: use CAST($3 AS TIMESTAMP) or the PostgreSQL-specific syntax $3::timestamp.
pool.query(
"INSERT INTO public.contest (contest_id, period_id, start_ts, end_ts, contest_name, default_format, status ) "
+ "VALUES ($1, $2, CAST ($3 AS TIMESTAMP), CAST($4 AS TIMESTAMP), $5, $6, 'NEW' ) RETURNING contest_key",
['VKVPA', '2019/02', '2019-01-20 08:00', '2019-01-20 11:00', 'VKV Provozni aktiv 2019/01', 'EDI']
)
.then( result => {
console.log(`New contest has number ${result.rows[0].contest_key}`);
})
.catch( reason => { console.log( 'Contest creation failed:', reason )});
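For completeness, the shorthand cast mentioned above would look like this; it is the same query, only with $n::timestamp in place of CAST($n AS TIMESTAMP):

pool.query(
"INSERT INTO public.contest (contest_id, period_id, start_ts, end_ts, contest_name, default_format, status ) "
+ "VALUES ($1, $2, $3::timestamp, $4::timestamp, $5, $6, 'NEW' ) RETURNING contest_key",
['VKVPA', '2019/02', '2019-01-20 08:00', '2019-01-20 11:00', 'VKV Provozni aktiv 2019/01', 'EDI']
)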

T-SQL: looking for a select of time FROM and TO ranges based on a VARCHAR (version) column

Looking for a solution to my problem.
I have a table that, in this example, contains only 2 columns.
The column varVersion holds the version number of our application.
The column dateLoginTime holds the time the customer last logged in to the application.
My first thought was to just take the MAX date, order by date and group by varVersion. This seemed to work just fine until users started going back to old application versions.
In my example you can see that a user was on version 2.1.3 and then moved back to 1.1.8. With MAX it looks like he used that version for 6 days; when I looked at the data he actually used it for only 5 minutes (by mistake).
Max:
varVersion | dateLoginTime
2.1.4 | 2018-03-13 11:31:26.893
1.1.8 | 2018-03-07 16:40:21.060
2.1.3 | 2018-02-28 12:26:52.760
2.1.2 | 2018-02-15 12:35:42.707
1.1.6 | 2018-01-23 15:01:46.410
I want to produce new FROM and TO fields showing when each version was in use, but I'm failing to get correct results. I tried MIN/MAX/OVER but the results are still wrong.
Min/Max result:
varVersion |FROM |TO
2.1.4 |2018-02-28 22:45:48.687 |2018-03-13 11:31:26.893
2.1.3 |2018-02-26 12:16:41.907 |2018-02-28 12:26:52.760
2.1.2 |2018-02-14 19:56:11.837 |2018-02-15 12:35:42.707
1.1.8 |2018-01-24 12:19:06.933 |2018-03-07 16:40:21.060
1.1.6 |2018-01-08 16:54:46.780 |2018-01-23 15:01:46.410
Expected Result
version |FROM |TO
2.1.4 |2018-03-07 16:45:10.207 |2018-03-13 11:31:26.893
1.1.8 |2018-03-07 16:40:21.060 |2018-03-07 16:45:10.207
2.1.4 |2018-02-28 22:45:48.687 |2018-03-07 16:40:21.060
2.1.3 |2018-02-26 12:16:41.907 |2018-02-28 22:45:48.687
2.1.2 |2018-02-14 19:56:11.837 |2018-02-26 12:16:41.907
1.1.8 |2018-01-24 12:19:06.933 |2018-02-14 19:56:11.837
1.1.6 |2018-01-08 16:54:46.780 |2018-01-24 12:19:06.933
Anyone have some ideas?
Thanks in advance
Petr
DATA:
--POPULATE DATA FOR TEST
drop table #temp
create table #temp
(varVersion VARCHAR(100),
dateLoginTime DATETIME)
INSERT INTO #temp (varVersion, dateLoginTime)
values
('2.1.4','2018-03-13 11:31:26.893'),
('2.1.4','2018-03-12 11:22:12.650'),
('2.1.4','2018-03-08 08:40:18.133'),
('2.1.4','2018-03-07 16:45:10.207'),
('1.1.8','2018-03-07 16:40:21.060'),
('2.1.4','2018-03-07 12:28:08.823'),
('2.1.4','2018-03-02 12:21:58.583'),
('2.1.4','2018-03-01 12:20:17.163'),
('2.1.4','2018-02-28 22:49:42.320'),
('2.1.4','2018-02-28 22:45:48.687'),
('2.1.3','2018-02-28 12:26:52.760'),
('2.1.3','2018-02-27 12:21:50.887'),
('2.1.3','2018-02-26 12:16:41.907'),
('2.1.2','2018-02-15 12:35:42.707'),
('2.1.2','2018-02-14 19:56:11.837'),
('1.1.8','2018-02-14 12:39:50.603'),
('1.1.8','2018-02-02 12:34:08.393'),
('1.1.8','2018-01-25 12:18:19.790'),
('1.1.8','2018-01-24 12:19:06.933'),
('1.1.6','2018-01-23 15:01:46.410'),
('1.1.6','2018-01-22 12:12:18.510'),
('1.1.6','2018-01-08 16:54:46.780')
--ORIGINAL STATEMENT
SELECT DISTINCT TOP 10
varVersion ,
MAX(dateLoginTime) dateLoginTime--, MAX(dateLoginTime)--, MAX(login_time)
FROM #temp
GROUP BY varVersion
ORDER BY 2 DESC
--NEW STATEMENT
SELECT DISTINCT TOP 10
varVersion ,
MIN(dateLoginTime) 'FROM', MAX(dateLoginTime) 'TO'
FROM #temp
GROUP BY varVersion
ORDER BY 2 DESC
select * from #temp
This should work. Use LAG to find the rows where the version changes (the start of each run), then search above that row for the end.
create table #T (ver VARCHAR(10), dt DATETIME);
INSERT INTO #T (ver, dt)
values
('2.1.4','2018-03-13 11:31:26.893'),
('2.1.4','2018-03-12 11:22:12.650'),
('2.1.4','2018-03-08 08:40:18.133'),
('2.1.4','2018-03-07 16:45:10.207'),
('1.1.8','2018-03-07 16:40:21.060'),
('2.1.4','2018-03-07 12:28:08.823'),
('2.1.4','2018-03-02 12:21:58.583'),
('2.1.4','2018-03-01 12:20:17.163'),
('2.1.4','2018-02-28 22:49:42.320'),
('2.1.4','2018-02-28 22:45:48.687'),
('2.1.3','2018-02-28 12:26:52.760'),
('2.1.3','2018-02-27 12:21:50.887'),
('2.1.3','2018-02-26 12:16:41.907'),
('2.1.2','2018-02-15 12:35:42.707'),
('2.1.2','2018-02-14 19:56:11.837'),
('1.1.8','2018-02-14 12:39:50.603'),
('1.1.8','2018-02-02 12:34:08.393'),
('1.1.8','2018-01-25 12:18:19.790'),
('1.1.8','2018-01-24 12:19:06.933'),
('1.1.6','2018-01-23 15:01:46.410'),
('1.1.6','2018-01-22 12:12:18.510'),
('1.1.6','2018-01-08 16:54:46.780');
select tt.ver, tt.dt as frm,
       isnull((select min(td.dt) from #T td where td.ver <> tt.ver and td.dt > tt.dt),
              (select max(dt) from #T)) as too
from ( select t.ver, t.dt, lag(t.ver) over (order by t.dt asc) as lagVer
       from #T t
     ) tt
where tt.ver <> tt.lagVer or tt.lagVer is null
order by tt.dt desc;
ver frm too
---------- ----------------------- -----------------------
2.1.4 2018-03-07 16:45:10.207 2018-03-13 11:31:26.893
1.1.8 2018-03-07 16:40:21.060 2018-03-07 16:45:10.207
2.1.4 2018-02-28 22:45:48.687 2018-03-07 16:40:21.060
2.1.3 2018-02-26 12:16:41.907 2018-02-28 22:45:48.687
2.1.2 2018-02-14 19:56:11.837 2018-02-26 12:16:41.907
1.1.8 2018-01-24 12:19:06.933 2018-02-14 19:56:11.837
1.1.6 2018-01-08 16:54:46.780 2018-01-24 12:19:06.933
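An alternative sketch (my own variation, not part of the answer above): once the change rows are isolated, LEAD can produce the end of each interval directly, avoiding the correlated subquery:

-- keep only the rows where the version changes, then take the next change's
-- time as the end of the interval; the last interval ends at the overall max
with changes as (
    select ver, dt, lag(ver) over (order by dt) as prevVer
    from #T
)
select ver, dt as frm,
       isnull(lead(dt) over (order by dt), (select max(dt) from #T)) as too
from changes
where prevVer is null or prevVer <> ver
order by dt desc;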

queryid column missing in pg_stat_statements table

We have a Heroku Postgres database running version 9.6.1.
When upgrading our pghero installation to the newest version, 2.0.2, we get failures because pghero can't find the queryid column in the pg_stat_statements view.
The pg_stat_statements extension is installed.
2017-08-10T09:54:44.002042+00:00 app[web.1]: Completed 500 Internal Server Error in 274ms (ActiveRecord: 186.1ms)
2017-08-10T09:54:44.004012+00:00 app[web.1]:
2017-08-10T09:54:44.004048+00:00 app[web.1]: ActiveRecord::StatementInvalid (PG::UndefinedColumn: ERROR: column "queryid" does not exist
2017-08-10T09:54:44.004050+00:00 app[web.1]: LINE 1: ...ry_stats AS ( SELECT LEFT(query, 10000) AS query, queryid AS...
2017-08-10T09:54:44.004051+00:00 app[web.1]: ^
2017-08-10T09:54:44.004052+00:00 app[web.1]: HINT: Perhaps you meant to reference the column "pg_stat_statements.query".
2017-08-10T09:54:44.004055+00:00 app[web.1]: : WITH query_stats AS ( SELECT LEFT(query, 10000) AS query, queryid AS query_hash, rolname AS user, (total_time / 1000 / 60) AS total_minutes, (total_time / calls) AS average_time, calls FROM pg_stat_statements INNER JOIN pg_database ON pg_database.oid = pg_stat_statements.dbid INNER JOIN pg_roles ON pg_roles.oid = pg_stat_statements.userid WHERE pg_database.datname = current_database() ) SELECT query, query_hash, query_stats.user, total_minutes, average_time, calls, total_minutes * 100.0 / (SELECT SUM(total_minutes) FROM query_stats) AS total_percent, (SELECT SUM(total_minutes) FROM query_stats) AS all_queries_total_minutes FROM query_stats ORDER BY "total_minutes" DESC LIMIT 100):
It turns out that upgrading the Postgres version on Heroku does not necessarily update extensions to their most current versions.
Updating the extension by running
ALTER EXTENSION pg_stat_statements UPDATE;
fixed the problem.
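To see whether an installed extension lags behind the version shipped with the server (a diagnostic I'm adding here, not part of the original fix), compare installed_version with default_version:

SELECT name, default_version, installed_version
FROM pg_available_extensions
WHERE name = 'pg_stat_statements';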

Error report - SQL Error: ORA-01843: not a valid month 01843

I am using SQL Developer for the first time. I can't understand why this error is occurring.
CREATE TABLE TOY_STORE
( TOY_STORE_ID NUMBER(3,0),
TOY_STORE_NAME VARCHAR2(30 BYTE) NOT NULL ENABLE,
CITY VARCHAR2(30 BYTE) DEFAULT 'DELHI',
PHONENUMBER" NUMBER(10,0) NOT NULL ENABLE,
STORE_OPENING_TIME TIMESTAMP (6),
STORE_CLOSING_TIME TIMESTAMP (6),
CHECK (EXTRACT(HOUR FROM CAST (TO_CHAR (STORE_OPENING_TIME, 'YYYY-MON-DD HH24:MI:SS') AS TIMESTAMP)) > 8 || NULL),
CHECK (EXTRACT(HOUR FROM CAST (TO_CHAR (STORE_CLOSING_TIME, 'YYYY-MON-DD HH24:MI:SS') AS TIMESTAMP)) < 21 || NULL);
INSERT INTO TOY_STORE
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312, '2014-04-01 09:10:12', '2014-04-01 21:42:05');
Following was the error given:
Error report - SQL Error: ORA-01843: not a valid month
01843. 00000 - "not a valid month"
*Cause:
*Action:
Error starting at line : 1 in command -
INSERT INTO TOY_STORE VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312, '04-2014-04 09:10:12', '04-2014-04 21:42:05')
Your CREATE TABLE as shown in the question has a stray double-quote and is missing a closing parenthesis. Your check constraints are odd:
you are converting a timestamp to a string using a specific format, and then casting back to a timestamp using the session NLS_TIMESTAMP_FORMAT, which will fail for sessions that don't have the setting you expect;
you are concatenating a null onto the hour value you're checking, e.g. 21 || NULL, which converts it to a string. It looks like you wanted to allow the value to be null and were using || as 'or', which is not what it means; but you don't need to allow for nulls explicitly anyway, since a check constraint already passes when its expression evaluates to null.
CREATE TABLE TOY_STORE (
TOY_STORE_ID NUMBER(3,0),
TOY_STORE_NAME VARCHAR2(30 BYTE) NOT NULL ENABLE,
CITY VARCHAR2(30 BYTE) DEFAULT 'DELHI',
PHONENUMBER NUMBER(10,0) NOT NULL ENABLE,
STORE_OPENING_TIME TIMESTAMP (6),
STORE_CLOSING_TIME TIMESTAMP (6),
CHECK (EXTRACT(HOUR FROM STORE_OPENING_TIME) > 8),
CHECK (EXTRACT(HOUR FROM STORE_CLOSING_TIME) < 21)
);
You might want to consider naming your constraints; and you might want a lower bound on the closing time and an upper bound on the opening time, or at least make sure opening isn't after closing (unless you're allowing for night opening; in which case you probably wouldn't want those constraints at all).
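A sketch of the same table with named constraints; the names TOY_STORE_OPEN_CK and TOY_STORE_CLOSE_CK are my own illustration, pick whatever fits your conventions:

CREATE TABLE TOY_STORE (
  TOY_STORE_ID NUMBER(3,0),
  TOY_STORE_NAME VARCHAR2(30 BYTE) NOT NULL ENABLE,
  CITY VARCHAR2(30 BYTE) DEFAULT 'DELHI',
  PHONENUMBER NUMBER(10,0) NOT NULL ENABLE,
  STORE_OPENING_TIME TIMESTAMP (6),
  STORE_CLOSING_TIME TIMESTAMP (6),
  CONSTRAINT TOY_STORE_OPEN_CK CHECK (EXTRACT(HOUR FROM STORE_OPENING_TIME) > 8),
  CONSTRAINT TOY_STORE_CLOSE_CK CHECK (EXTRACT(HOUR FROM STORE_CLOSING_TIME) < 21)
);

A named constraint then shows up by name in ORA-02290 errors, which makes violations much easier to trace.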
Then you are inserting using a string literal, not a timestamp. You are again relying on implicit conversion using your client's NLS_TIMESTAMP_FORMAT setting. The SQL Developer default for that is DD-MON-RR HH24.MI.SSXFF, which means that with '2014-04-01 09:10:12' it will try to map the parts of the string literal to the parts of that format mask and fail; that default format model does give ORA-01843.
You should either convert your string with an explicit format mask, using to_timestamp():
INSERT INTO TOY_STORE (TOY_STORE_ID, TOY_STORE_NAME, CITY, PHONENUMBER,
STORE_OPENING_TIME, STORE_CLOSING_TIME)
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312,
to_timestamp('2014-04-01 09:10:12', 'YYYY-MM-DD HH24:MI:SS'),
to_timestamp('2014-04-01 21:42:05', 'YYYY-MM-DD HH24:MI:SS'));
or use ANSI timestamp literals:
INSERT INTO TOY_STORE (TOY_STORE_ID, TOY_STORE_NAME, CITY, PHONENUMBER,
STORE_OPENING_TIME, STORE_CLOSING_TIME)
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312,
timestamp '2014-04-01 09:10:12', timestamp '2014-04-01 21:42:05');
... which violates your closing-time constraint as expected (hour 21 is not less than 21):
SQL Error: ORA-02290: check constraint (SCHEMA.SYS_C00113492) violated
02290. 00000 - "check constraint (%s.%s) violated"
*Cause: The values being inserted do not satisfy the named check
*Action: do not insert values that violate the constraint.
Notice the SYS_C00113492 name; that's a system-generated name for your constraint. It will be easier to follow what is happening if you name your constraints.
You are still allowed to insert nulls:
INSERT INTO TOY_STORE (TOY_STORE_ID, TOY_STORE_NAME, CITY, PHONENUMBER,
STORE_OPENING_TIME, STORE_CLOSING_TIME)
VALUES(1, 'Kid''s Cave', 'Delhi', 9912312312, null, null);
1 row inserted.