All tables on my website have a common field named 'deleted_by'. For database queries I use Eloquent. When I write a query such as Category::all(), I want it to automatically behave like Category::where('deleted_by', null)->get(); in other words, I want every query to automatically apply where('deleted_by', null). Example: the categories table has two rows, see below.
| name | deleted_by |
| ---- | ---------- |
| cat1 | null       |
| cat2 | yes        |
When I write the query Category::all(), it should return only the first row (because its deleted_by is null).
Is there any solution?
Thanks.
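One way to get this behavior is an Eloquent global scope registered in a shared base model. A minimal sketch, assuming Laravel 7+ (the class names here are illustrative, not part of the question):

use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;

// Hypothetical base model: every model extending it gets the filter applied.
abstract class BaseModel extends Model
{
    protected static function booted()   // on Laravel < 7, override boot() and call parent::boot() first
    {
        // Adds "WHERE deleted_by IS NULL" to every query built for the model.
        static::addGlobalScope('not_deleted', function (Builder $query) {
            $query->whereNull('deleted_by');
        });
    }
}

class Category extends BaseModel
{
    // Category::all() now runs: SELECT * FROM categories WHERE deleted_by IS NULL
}

With this in place, Category::all() and Category::get() return only rows whose deleted_by is null, and Category::withoutGlobalScope('not_deleted') can be used when the filtered-out rows are needed.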
Googling for a definition either returns results for a column oriented DB or gives very vague definitions.
My understanding is that wide column stores consist of column families which consist of rows and columns. Each row within said family is stored together on disk. This sounds like how row oriented databases store their data. Which brings me to my first question:
How are wide column stores different from a regular relational DB table? This is the way I see it:
* column family -> table
* column family column -> table column
* column family row -> table row
This image from Database Internals simply looks like two regular tables:
My guess as to what is different comes from the fact that "multi-dimensional map" is mentioned alongside wide column stores. So here is my second question:
Are wide column stores sorted from left to right? Meaning, in the above example, are the rows sorted first by Row Key, then by Timestamp, and finally by Qualifier?
Let's start with the definition of a wide column database.
Its architecture uses a persistent, sparse-matrix, multi-dimensional mapping (row value, column value, and timestamp) in a tabular format meant for massive scalability (over and above the petabyte scale).
A relational database is designed to maintain the relationship between the entity and the columns that describe the entity. A good example is a Customer table. The columns hold values describing the Customer's name, address, and contact information. All of this information is the same for each and every customer.
A wide column database is one type of NoSQL database.
Maybe this is a better image of four wide column databases.
My understanding is that the first image at the top, the Column model, is what we call an entity/attribute/value table. It's an attribute/value table within a particular entity (column).
For Customer information, the first wide column database example might look like this.
Customer ID   Attribute   Value
-----------   ---------   --------------------
100001        name        John Smith
100001        address 1   10 Victory Lane
100001        address 3   Pittsburgh, PA 15120
Yes, we could have modeled this for a relational database. The power of the attribute/value table comes with the more unusual attributes.
Customer ID   Attribute   Value
-----------   ---------   ----------
100001        fav color   blue
100001        fav shirt   golf shirt
Any attribute that a marketer can dream up can be captured and stored in an attribute/value table. Different customers can have different attributes.
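In relational terms, such an attribute/value table is just a three-column table; a rough sketch (table and column names are illustrative):

-- Relational sketch of the attribute/value pattern described above.
CREATE TABLE customer_attribute (
    customer_id INT          NOT NULL,
    attribute   VARCHAR(50)  NOT NULL,
    value       VARCHAR(200),
    PRIMARY KEY (customer_id, attribute)  -- one value per attribute per customer
);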
The Super Column model keeps the same information in a different format.
Customer ID: 100001
Attribute   Value
---------   ----------
fav color   blue
fav shirt   golf shirt
You can have as many Super Column models as you have entities. They can be in separate NoSQL tables or put together as a Super Column family.
The Column Family and Super Column Family simply give a row ID to the first two models in the picture for quicker retrieval of information.
Most (if not all) wide-column stores are indeed row-oriented stores in that all parts of a record are stored together. You can think of them as two-dimensional key-value stores: the first part of the key is used to distribute the data across servers, and the second part lets you quickly find the data on the target server.
Different wide-column stores have different features and behaviors, but Apache Cassandra, for example, allows you to define how the data will be sorted. Take this table as an example:
| id | country | timestamp | message |
|----+---------+------------+---------|
| 1 | US | 2020-10-01 | "a..." |
| 1 | JP | 2020-11-01 | "b..." |
| 1 | US | 2020-09-01 | "c..." |
| 2 | CA | 2020-10-01 | "d..." |
| 2 | CA | 2019-10-01 | "e..." |
| 2 | CA | 2020-11-01 | "f..." |
| 3 | GB | 2020-09-01 | "g..." |
| 3 | GB | 2020-09-02 | "h..." |
|----+---------+------------+---------|
If your partitioning key is (id) and your clustering key is (country, timestamp), the data will be stored like this:
[Key 1]
1:JP,2020-11-01,"b..." | 1:US,2020-09-01,"c..." | 1:US,2020-10-01,"a..."
[Key 2]
2:CA,2019-10-01,"e..." | 2:CA,2020-10-01,"d..." | 2:CA,2020-11-01,"f..."
[Key 3]
3:GB,2020-09-01,"g..." | 3:GB,2020-09-02,"h..."
Or in table form:
| id | country | timestamp | message |
|----+---------+------------+---------|
| 1 | JP | 2020-11-01 | "b..." |
| 1 | US | 2020-09-01 | "c..." |
| 1 | US | 2020-10-01 | "a..." |
| 2 | CA | 2019-10-01 | "e..." |
| 2 | CA | 2020-10-01 | "d..." |
| 2 | CA | 2020-11-01 | "f..." |
| 3 | GB | 2020-09-01 | "g..." |
| 3 | GB | 2020-09-02 | "h..." |
|----+---------+------------+---------|
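For reference, a possible CQL table definition that produces this layout (the table name and column types are assumptions; the columns follow the example above):

-- Partition key: id; clustering keys: country, then timestamp (both ascending by default).
CREATE TABLE messages_by_country (
    id        int,
    country   text,
    timestamp date,
    message   text,
    PRIMARY KEY ((id), country, timestamp)
);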
If you change the primary key (the composite of the partition key and clustering key) to (id, timestamp), where id is the partition key and timestamp is the only clustering key (ascending, the default order), the result would be:
[Key 1]
1:US,2020-09-01,"c..." | 1:US,2020-10-01,"a..." | 1:JP,2020-11-01,"b..."
[Key 2]
2:CA,2019-10-01,"e..." | 2:CA,2020-10-01,"d..." | 2:CA,2020-11-01,"f..."
[Key 3]
3:GB,2020-09-01,"g..." | 3:GB,2020-09-02,"h..."
Or in table form:
| id | country | timestamp | message |
|----+---------+------------+---------|
| 1 | US | 2020-09-01 | "c..." |
| 1 | US | 2020-10-01 | "a..." |
| 1 | JP | 2020-11-01 | "b..." |
| 2 | CA | 2019-10-01 | "e..." |
| 2 | CA | 2020-10-01 | "d..." |
| 2 | CA | 2020-11-01 | "f..." |
| 3 | GB | 2020-09-01 | "g..." |
| 3 | GB | 2020-09-02 | "h..." |
|----+---------+------------+---------|
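Again as a sketch under the same assumed names, the CQL for this second layout might be:

-- Partition key: id; clustering key: timestamp (ascending by default); country is now a regular column.
CREATE TABLE messages_by_time (
    id        int,
    country   text,
    timestamp date,
    message   text,
    PRIMARY KEY ((id), timestamp)
);

Note that with this key, two rows sharing the same id and timestamp would overwrite each other, so it only suits data where that pair is unique.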
I suspect this question is already well-answered but perhaps due to limited SQL vocabulary I have not managed to find what I need. I have a database with many code:description mappings in a single 'parameter' table. I would like to define a query or procedure to return the descriptions for all (or an arbitrary list of) coded values in a given 'content' table with their descriptions from the parameter table. I don't want to alter the original data, I just want to display friendly results.
Is there a standard way to do this?
Can it be accomplished with SELECT or are other statements required?
Here is a sample query for a single coded field:
SELECT TOP (5)
    NEWID() AS id,
    B.BRIDGE_STATUS,
    P.SHORTDESC
FROM
    BRIDGE B
    LEFT JOIN PARAMTRS P ON P.TABLE_NAME = 'BRIDGE'
        AND P.FIELD_NAME = 'BRIDGE_STATUS'
        AND P.PARMVALUE = B.BRIDGE_STATUS
ORDER BY
    id
I want to produce 'decoded' results like:
| id                                   | BRIDGE_STATUS |
|--------------------------------------|---------------|
| BABCEC1E-5FE2-46FA-9763-000131F2F688 | Active        |
| 758F5201-4742-43C6-8550-000571875265 | Active        |
| 5E51634C-4DD9-4B0A-BBF5-00087DF71C8B | Active        |
| 0A4EA521-DE70-4D04-93B8-000CD12B7F55 | Inactive      |
| 815C6C66-8995-4893-9A1B-000F00F839A4 | Proposed      |
Rather than original, coded data like:
| id | BRIDGE_STATUS |
|--------------------------------------|---------------|
| F50214D7-F726-4996-9C0C-00021BD681A4 | 3 |
| 4F173E40-54DC-495E-9B84-000B446F09C3 | 3 |
| F9C216CD-0453-434B-AFA0-000C39EFA0FB | 3 |
| 5D09554E-201D-4208-A786-000C537759A1 | 1 |
| F0BDB9A4-E796-4786-8781-000FC60E200C | 4 |
but for an arbitrary number of columns.
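One common pattern (a sketch only, reusing the PARAMTRS layout from the query above; BRIDGE_OWNER is a hypothetical second coded column) is to join the parameter table once per coded column:

SELECT TOP (5)
    NEWID() AS id,
    PS.SHORTDESC AS BRIDGE_STATUS,
    PO.SHORTDESC AS BRIDGE_OWNER     -- hypothetical second coded column
FROM
    BRIDGE B
    LEFT JOIN PARAMTRS PS ON PS.TABLE_NAME = 'BRIDGE'
        AND PS.FIELD_NAME = 'BRIDGE_STATUS'
        AND PS.PARMVALUE = B.BRIDGE_STATUS
    LEFT JOIN PARAMTRS PO ON PO.TABLE_NAME = 'BRIDGE'
        AND PO.FIELD_NAME = 'BRIDGE_OWNER'
        AND PO.PARMVALUE = B.BRIDGE_OWNER
ORDER BY
    id

Each additional coded column adds one more LEFT JOIN with its own alias, so the pattern scales to an arbitrary number of columns, though it does get verbose.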
I'm not sure what to call what I'm trying to do, so trying to look it up didn't work very well. I would like to aggregate my table based on one column and have all the rows from another column collapsed into an array by unique ID.
| ID | some_other_value |
-------------------------
| 1 | A |
| 1 | B |
| 2 | C |
| .. | ... |
To return
| ID | values_array |
-------------------------
| 1 | {A, B} |
| 2 | {C} |
Sorry for the bad explanation, I'm really lacking the vocabulary here. Any help with writing a query that achieves what's in the example would be very much appreciated.
Try the following.
SELECT id,
       array_agg(some_other_value ORDER BY some_other_value) AS values_array
FROM <yourTableName>
GROUP BY id
See the Aggregate Functions documentation.
SELECT
    id,
    array_agg(some_other_value)
FROM
    the_table
GROUP BY
    id;
I have a list of survey results that looks similar to the following:
| Email | Question 1 | Question 2 |
| ----------------- | ---------- | ---------- |
| test@example.com  | Always     | Sometimes  |
| test2@example.com | Always     | Always     |
| test3@example.com | Sometimes  | Never      |
Question 1 and Question 2 (and a few others) have the same discrete set of values (from a dropdown list on the survey).
I want to show the data in the following format in Tableau (a table is fine, but a heatmap or highlight table would be best):
| | Always | Sometimes | Never |
| ---------- | ------ | --------- | ----- |
| Question 1 | 2 | 1 | 0 |
| Question 2 | 1 | 1 | 1 |
How can I achieve this? I've tried various combinations of rows and columns and I just can't seem to get close to this layout. Do I need to use a calculated value?
As far as I know, it is not natively possible in Tableau, because what you have is a kind of pivot table.
What you can do is unpivot the whole table, as explained here: https://stackoverflow.com/a/20543651/5130012. Then you can load the data into Tableau and create the table you want.
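For example, a rough SQL sketch of that unpivot (the table name survey_results and the columns email, question1, and question2 are assumptions standing in for your source data):

-- Stack one SELECT per question column with UNION ALL so each question becomes its own row.
SELECT email, 'Question 1' AS question, question1 AS answer FROM survey_results
UNION ALL
SELECT email, 'Question 2' AS question, question2 AS answer FROM survey_results;

In Tableau the result can then be laid out with the question field on Rows, the answer field on Columns, and a count of records as the mark.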
I created some dummy data and tried it.
That's my "unpivoted" table:
Row,Column,Value
test,q1,always
test,q2,sometimes
test1,q1,sometimes
test1,q2,never
test10,q1,always
test10,q2,always
test11,q1,sometimes
test11,q2,never
And that's how it looks in Tableau:
I've just upgraded to PostgreSQL 9.3 beta. When I apply the json_each or json_each_text function to a json column, the result is a set of rows with the column names 'key' and 'value'.
Here's an example:
I have a table named customers whose education column is of type json.
The customers table is as follows:
----------------------------------------------------------------
| id | first_name | last_name | education                      |
----------------------------------------------------------------
| 1  | Harold     | Finch     | {"school":"KSU","state":"KS"}  |
----------------------------------------------------------------
| 2  | John       | Reese     | {"school":"NYSU","state":"NY"} |
----------------------------------------------------------------
The query
select * from customers, json_each_text(customers.education) where value = 'NYSU'
returns a set of rows with the following column names
----------------------------------------------------------------------------------
| id | first_name | last_name | education                      | key    | value |
----------------------------------------------------------------------------------
| 2  | John       | Reese     | {"school":"NYSU","state":"NY"} | school | NYSU  |
----------------------------------------------------------------------------------
because the json_each_text function returns a set of rows with the column names key and value by default.
However, I want json_each_text to return custom column names such as key1 and value1:
-----------------------------------------------------------------------------------
| id | first_name | last_name | education                      | key1   | value1 |
-----------------------------------------------------------------------------------
| 2  | John       | Reese     | {"school":"NYSU","state":"NY"} | school | NYSU   |
-----------------------------------------------------------------------------------
Is there a way to get different column names like 'key1' and 'value1' after applying those functions?
You can solve that by using AS aliases in the FROM and SELECT clauses:
postgres=# SELECT json_data.key AS key1,
                  json_data.value AS value1
           FROM customers,
                json_each_text(customers.education) AS json_data
           WHERE value = 'NYSU';
  key1  | value1
--------+--------
 school | NYSU
(1 row)
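If you also need the new names to be usable elsewhere in the query (for example in the WHERE clause), PostgreSQL lets you attach a column alias list to the function call in FROM; a sketch against the same customers table:

SELECT c.id,
       c.first_name,
       c.last_name,
       c.education,
       json_data.key1,
       json_data.value1
FROM customers AS c,
     json_each_text(c.education) AS json_data(key1, value1)  -- renames the output columns
WHERE json_data.value1 = 'NYSU';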