Replace Single Column Value in Single Row in SQLite with iPhone App

I just started working with SQLite in an iPhone application. My question: I have three columns in a table, ID, Channel_Name and Channel_IP. Here is an example of the table:
ID | Channel_Name | Channel_IP
---+--------------+----------------------------------------
1  | XYZ          | http://0.0.0.0/index.sdp/playlist.m3u8
2  | ABC          | http://0.0.0.0/index.sdp/playlist.m3u8
I want to update only the IP address inside the Channel_IP column, not the whole value: change 0.0.0.0 to 1.1.1.1 while leaving the rest of the URL intact. I also searched on Google but found no relevant solution, so if anyone knows how, please let me know.
Thank You

Use the query below:
UPDATE yourtableName SET Channel_IP = replace(Channel_IP, '0.0.0.0', '1.1.1.1');
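SQLite's replace(X, Y, Z) rewrites every occurrence of Y in every row the UPDATE touches. To avoid rewriting rows that don't contain the old address at all, you can narrow the statement with a WHERE clause (a minimal variant of the query above, using the same placeholder table name):

UPDATE yourtableName
SET Channel_IP = replace(Channel_IP, '0.0.0.0', '1.1.1.1')
WHERE Channel_IP LIKE '%0.0.0.0%';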

Related

Postgres: optimize query where column is in string

Let's say I have a table like so:
webpages
id | url
------------------------
1 | http://example.com/path
2 | example2.biz/another/path
3 | https://www.example3.net/another/path
And I want to find which rows' url column is a substring of an input string. I know I can do it like this:
SELECT id FROM webpages WHERE STRPOS(url, 'example.com/path/to/some/content') > 0;
Expected result: 1
But I'm not sure how I might optimize this kind of query to run faster. Is there a way?
I found this mailing-list thread; it seems to suggest not, though I'm not sure whether that's still true, as it's from over a decade ago.
https://www.postgresql.org/message-id/046801c96b06%242cb14280%248613c780%24%40r%40sbcglobal.net
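No conventional index helps directly here, because the roles are reversed: the indexed column is the needle and the query parameter is the haystack. One workaround, sketched below under the assumption that a stored url can only ever match as a prefix of the input (as in the expected result above), is to expand the input into all of its prefixes and switch to plain equality, which a btree index on url can serve:

-- assumes a btree index on webpages (url)
SELECT w.id
FROM webpages w
WHERE w.url = ANY (
    SELECT left('example.com/path/to/some/content', n)
    FROM generate_series(1, length('example.com/path/to/some/content')) AS n
);

Note this deliberately ignores matches that start mid-string (which row 2 would need); handling those would require a different strategy.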

How to save the predictions of YOLO (You Only Look Once) Object detection in a jsonb field in a database

I want to run Darknet (YOLO) on a number of images and store its predictions in a PostgreSQL database.
This is the structure of my table:
sample=> \d+ prediction2;
                   Table "public.prediction2"
   Column    | Type  | Modifiers | Storage  | Stats target | Description
-------------+-------+-----------+----------+--------------+-------------
 path        | text  | not null  | extended |              |
 pred_result | jsonb |           | extended |              |
Indexes:
    "prediction2_pkey" PRIMARY KEY, btree (path)
Darknet(YOLO)'s source files are written in C.
I have already stored Caffe's predictions in the database as follows. I have listed one of the rows of my database here as an example.
path | pred_result
-------------------------------------------------+------------------------------------------------------------------------------------------------------------------
/home/reena-mary/Pictures/predict/gTe5gy6xc.jpg | {"bow tie": 0.00631, "lab coat": 0.59257, "neck brace": 0.00428, "Windsor tie": 0.01155, "stethoscope": 0.36260}
I want to add YOLO's predictions to the jsonb data in pred_result, i.e. for each image path and Caffe prediction result already stored in the database, I would like to append Darknet's (YOLO's) predictions.
The reason I want to do this is to add search tags to each image. So, by running Caffe and Darknet on images, I want to be able to get enough labels that can help me make my image search better.
Kindly help me with how I should do this in Darknet.
This is an issue I also encountered. Actually YOLO does not provide a JSON output interface, so there is no way to get the same output as from Caffe.
However, there is a pull request that you can merge to get workable output here: https://github.com/pjreddie/darknet/pull/34/files. It outputs CSV data, which you can convert to JSON to store in the database.
You could of course also alter the source code of YOLO to make your own implementation that outputs JSON directly.
If you are able to use a TensorFlow implementation of YOLO, try this: https://github.com/thtrieu/darkflow
You can directly interact with darkflow from another python application and then do with the output data as you please (or get JSON data saved to a file, whichever is easier).
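Whichever route you take to get the YOLO labels as JSON, appending them to the Caffe labels already stored in pred_result is then straightforward on the Postgres side, since jsonb supports concatenation with the || operator (PostgreSQL 9.5 or later; the YOLO labels below are made-up placeholders):

-- merge the new labels into the existing jsonb object;
-- keys present in both sides keep the right-hand (new) value
UPDATE prediction2
SET pred_result = pred_result || '{"person": 0.87412, "necktie": 0.22113}'::jsonb
WHERE path = '/home/reena-mary/Pictures/predict/gTe5gy6xc.jpg';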

Postgres array fields: find where array contains value

Currently I have a table schema that looks like this:
| id | visitor_ids | name |
|----|-------------|----------------|
| 1 | {abc,def} | Chris Houghton |
| 2 | {ghi} | Matt Quinn |
The visitor_ids are all GUIDs; I've just shortened them for simplicity.
A user can have multiple visitor ids, hence the array type.
I have a GIN index created on the visitor_ids field.
I want to be able to lookup users by a visitor id. Currently we're doing this:
SELECT *
FROM users
WHERE visitor_ids && array['abc'];
The above works, but it's really slow at scale: it takes around 45ms, which is ~700x slower than a lookup by the primary key (even with the GIN index).
Surely there's got to be a more efficient way of doing this? I've looked around and wasn't able to find anything.
Possible solutions I can think of could be:
The current query is just bad and needs improving
Using a separate user_visitor_ids table
Something smart with special indexes
Help appreciated :)
I tried the second solution, a separate user_visitor_ids table: 700x faster. Bingo.
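For reference, a minimal sketch of that second approach (table and column names are hypothetical); the lookup becomes a btree index scan on visitor_id plus a join back to users on its primary key:

CREATE TABLE user_visitor_ids (
    visitor_id text    PRIMARY KEY,
    user_id    integer NOT NULL REFERENCES users (id)
);

SELECT u.*
FROM users u
JOIN user_visitor_ids v ON v.user_id = u.id
WHERE v.visitor_id = 'abc';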
I still feel like this is an unsolved problem, however: what's the point of adding arrays to Postgres when the performance is so bad, even with indexes?

Cassandra 1.2 CQL query set

I have a table in Cassandra containing name, item.
Using the following data types: name is text, item is set<text>.
For example, I have these entries:
name | item
-----+----------------
a    | {item1, item3}
b    | {item2, item3}
c    | {item1, item2}
Now my question: Is there any way to get all names having item1?
I tried this, but it didn't work:
SELECT name
FROM table
WHERE item = 'item1';
I get an error that 'item1' is a string, but item is a set<text>.
I guess there is a way to do this, but I can't think of how.
Thanks in advance.
Unfortunately, this is not yet supported in Cassandra 1.2. Maybe in some upcoming version we will be able to index even collection items.
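For what it's worth, that upcoming version did eventually arrive: Cassandra 2.1 added secondary indexes on collections together with a CONTAINS operator. A sketch of what the query looks like there, keeping the placeholder names from the question (table stands in for your actual table name):

CREATE INDEX ON table (item);

SELECT name
FROM table
WHERE item CONTAINS 'item1';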

database design decision (NoSQL) [closed]

I'm working on an application that has the following use case:
Users upload csv files, which need to be persisted across application restarts
The data in the csv files need to be queried/sorted etc
Users specify the query-able columns in a csv file at the time of uploading the file
The currently proposed solution is:
For small files (much more common), transform the data into XML and store it either as a LOB or in the file system. For querying, slurp the whole document into memory and use something like XQuery
For larger files, create dynamic tables in the database (MySQL), with indexes on the query-able columns
Although we have prototyped this solution and it works reasonably well, it's keeping us from supporting more complex file formats such as XML and JSON. There are also a few more niggling issues with the solution that I won't go into.
Considering the schemaless nature of NoSQL databases, I thought they might be used to solve this problem. I have no practical experience with NoSQL, though. My questions are:
Is NoSQL well suited for this use case?
If so, which NoSQL database?
How would we store csv files in the DB (as collections of key-value pairs, where the column headers make up the keys and the data fields from each row make up the values)?
How would we store XML/JSON files with possibly deeply hierarchical structures?
How about querying/indexing and other performance considerations? How does that compare to something like MySQL?
Appreciate the responses and thanks in advance!
example csv file:
employee_id,name,address
1234,XXXX,abcabc
001001,YYY,xyzxyz
...
DDL statement:
CREATE TABLE `employees` (
    `id`          INT(6)       NOT NULL AUTO_INCREMENT,
    `employee_id` VARCHAR(12)  NOT NULL,
    `name`        VARCHAR(255),
    `address`     TEXT,
    PRIMARY KEY (`id`),
    UNIQUE INDEX `EMPLOYEE_ID` (`employee_id`)
);
for each row in the csv file:
INSERT INTO `employees`
    (`employee_id`, `name`, `address`)
VALUES (...);
Not really a full answer, but I think I can help on some points.
For number 2, I can at least give this link, which helps with sorting out the different NoSQL implementations.
For number 3, using a SQL database (though the idea should fit a NoSQL system as well), I would represent columns and rows as two separate tables, and add a third table with foreign keys to both, holding each cell's value. You get one big table with easy filtering.
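A minimal sketch of that three-table layout in MySQL (all table and column names here are hypothetical):

-- one row per column header of the uploaded csv
CREATE TABLE `csv_columns` (
    `id`   INT NOT NULL AUTO_INCREMENT,
    `name` VARCHAR(255) NOT NULL,
    PRIMARY KEY (`id`)
);

-- one row per data row of the uploaded csv
CREATE TABLE `csv_rows` (
    `id` INT NOT NULL AUTO_INCREMENT,
    PRIMARY KEY (`id`)
);

-- one row per cell, keyed by (row, column)
CREATE TABLE `csv_cells` (
    `row_id`    INT NOT NULL,
    `column_id` INT NOT NULL,
    `value`     TEXT,
    PRIMARY KEY (`row_id`, `column_id`),
    FOREIGN KEY (`row_id`)    REFERENCES `csv_rows` (`id`),
    FOREIGN KEY (`column_id`) REFERENCES `csv_columns` (`id`)
);

-- e.g. every value of the "address" column:
SELECT c.`value`
FROM `csv_cells` c
JOIN `csv_columns` col ON col.`id` = c.`column_id`
WHERE col.`name` = 'address';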
For number 4, you need to "represent hierarchical data in a table"
The common approach to this would be to have a table with attributes and a foreign key to the same table, pointing to the parent; for example:
+----+------------+------------+--------+
| id | attribute1 | attribute2 | parent |
+----+------------+------------+--------+
| 0 | potato | berliner | NULL |
| 1 | hello | jack | 0 |
| 2 | hello | frank | 0 |
| 3 | die | please | 1 |
| 4 | no | thanks | 1 |
| 5 | okay | man | 4 |
| 6 | no | ideas | 2 |
| 7 | last | one | 2 |
+----+------------+------------+--------+
Now the problem is that if you want to get, say, all the child elements of element 1, you'll have to query every item individually to obtain its children. Some other operations are hard as well, because they need the path to an object, traversing many other objects and making extra data queries.
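(As an aside, databases that support recursive CTEs, e.g. PostgreSQL, or MySQL from 8.0 onward, can express this traversal in a single statement; here is a sketch against the example table above, using a hypothetical table name nodes. It returns element 1 together with all of its descendants:)

WITH RECURSIVE descendants AS (
    SELECT * FROM nodes WHERE id = 1          -- start at element 1
    UNION ALL
    SELECT n.*
    FROM nodes n
    JOIN descendants d ON n.parent = d.id     -- walk down one level per iteration
)
SELECT * FROM descendants;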
One common workaround to this, and the one I use and prefer, is called modified pre-order tree traversal.
Using this technique, we need an extra layer between the data storage and the application to fill some extra columns at each structure-altering modification. We will assign three properties to each object: left, right and depth.
The left and right properties are filled by counting each object from the top, traversing the whole tree recursively.
This is a rough approximation of the traversal algorithm for left and right (the part for depth is easily guessed; it is just a few lines to add):
1. Set the tree root's (or the first tree root's, if there are many) left attribute to 1.
2. Go to its first (or next) child. Set its left attribute to the last number plus one (here, 2).
3. Does it have any children? If yes, go back to step 2. If no, set its right attribute to the last number plus one.
4. Go to the next child and do the same as in step 2.
5. If there are no more children, go to the next child of the parent and do the same as in step 2.
[Figure: the example tree annotated with the left and right values produced by this traversal (source: narod.ru)]
Now it is much easier to find all descendants of an object, or all of its ancestors. Either can be done with a single query, using left and right.
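For instance, a sketch of the descendants query (the table name nodes and the columns lft/rgt are hypothetical; I avoid naming them left and right because those are reserved words in most SQL dialects):

-- all descendants of the node with id 1:
-- a node is a descendant iff its (lft, rgt) interval lies strictly inside the parent's
SELECT child.*
FROM nodes AS child
JOIN nodes AS parent
  ON child.lft > parent.lft
 AND child.rgt < parent.rgt
WHERE parent.id = 1;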
What is important when using this is having a good implementation of the layer between the data and the application, handling the left, right and depth attributes. These fields have to be adjusted when:
An object is deleted
An object is added
The parent field of an object is modified
This can be done with a parallel process, using locks. It can also be implemented directly between the data and the application.
See these links for more information about trees :
Managing hierarchies in SQL: MPTT/nested sets vs adjacency lists vs storing paths
MPTT With Django lib
http://www.sitepoint.com/hierarchical-data-database-2/
I personally had great results with django-nonrel and django-mptt the few times I did NoSQL.