I would like to create a column on an in-memory table that generates a colour HEX code based on a person's name (another column). A quick Google didn't give much, so I wondered if any pointers could be given here.
e.g.
update colour: <some code and use username col as input> from table
In kdb+ you can run a function on a column via an update statement, but there are slight differences depending on whether the function is vectorised or not. If vectorised:
update colour:{<some code>}[username] from table
update colour:someFunction[username] from table
If not vectorised, then an iterator such as each (') is required:
update colour:{<some code>}'[username] from table
update colour:someFunction'[username] from table
This function generates a hex code from the first 3 characters of a string:
q)hex:{a:i-16*j:(i:`int$3#x)div 16;"0123456789ABCDEF"raze(j-16*j div 16),'a}
q)hex"Hello"
"48656C"
q)update colour:hex'[username] from table
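As a quick end-to-end check with a made-up table (note that 3#x cycles when a name is shorter than three characters, so "Al" would be encoded as "AlA"):
q)table:([] username:("Alice";"Bob";"Carol"))
q)update colour:hex'[username] from table
This yields the colours "416C69", "426F62" and "436172" respectively.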
In the first query, I'm making the JOB column lowercase and then searching through it, but it's not finding any data. Why? Thanks. Just FYI, since you don't have the database: all records in the JOB column are uppercase (that's why it isn't returning anything), but that's also why I'm making it lowercase first.
In the second query, I'm trying to concat only the rows with an ename matching specific criteria -- anything that has an r in the ENAME column (there are multiple records with an r in them) -- but it isn't working (no data found). Why, and how do I get it done? Thanks.
SELECT LOWER(JOB) FROM EMP
WHERE JOB = LOWER('MANAGER');
SELECT CONCAT('My name is ',ename)
FROM EMP
WHERE ENAME LIKE '%r%';
I tested both of your SQL statements and they work fine for me. Are you sure the records are in the DB? Are you sure the column names are correct?
EDIT: OK, so the name of the column is in lowercase, but in your WHERE it's in uppercase. That's all :)
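For reference, if the issue is the data's case rather than the column name: since the question states the stored values are all uppercase, lowercasing the column (not just the literal) makes both queries match. A minimal sketch:
SELECT LOWER(JOB) FROM EMP
WHERE LOWER(JOB) = 'manager';

SELECT CONCAT('My name is ', ename)
FROM EMP
WHERE LOWER(ENAME) LIKE '%r%';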
If I create a table with:
t = table(magic(3));
I get a table with a single variable name.
However if I:
a = magic(3);
T = array2table(a);
Then I get a table with three variable names.
If I try to group the columns by sending it only one variable name for the table:
T.Properties.VariableNames = {'OneName'};
I get this error: "The VariableNames property must contain one name for each variable in the table."
In the second situation, there is an option to combine the columns into one column manually, by highlighting the columns and right-clicking.
How can I programmatically group the three variables to become one variable, as in the first example, if I have already created the matrix a?
EDIT:
*as in the first example if I have already created the table a?
I am using R2017b
Based on the comment below, I am asking how to do mergevars prior to R2018a.
In the above example, I would be able to group them into one variable with:
t = table(a);
In other words, I hoped to create multiple multicolumn variables; that is, to do mergevars prior to R2018a.
Once the table T has been created with a variable name for each column, the column values can be extracted and then assigned back to T:
b = T{:, 1:2};   % extract the first two columns as one matrix
c = T{:, 3};     % extract the third column as a vector
T = table(b, c); % rebuild T with two variables: b (two columns) and c
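As a quick check, T now has two variables (one of them holding two columns), so assigning one name per merged variable succeeds (the names here are just placeholders):
T.Properties.VariableNames = {'FirstTwo', 'LastOne'};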
I'm new to PostgreSQL and I am working on a function to return the word locations for a searched word.
I want to first narrow down the text fields the search has to go through, to make sure it is a relevant result from the database.
My table is called 'testing', the text column is called 'context', and the line number where the text is located is called 'line_number'; each context text is associated with a specific line_number.
Right now my ranking code looks like this:
SELECT line_number INTO lineLocation
FROM (
    SELECT
        testing.line_number,
        ts_rank_cd(to_tsvector('english', testing.context),
                   to_tsquery('Cats & Dogs & Kids')) AS score
    FROM testing
) ranking
WHERE score > 0
ORDER BY score DESC;
RETURN QUERY SELECT * FROM lineLocation;
When I try to print out lineLocation as a return query, it works in reporting the new ranked line numbers 22, 19, 21, 20, 17, 13, each returned in its own row.
My problem now is that I want to search each of those lines (22 ... 13) for a key word like "dog" and return its position.
I am obtaining the text for that by using:
SELECT context INTO sample FROM testing
WHERE testing.line_number = lineLocation;
If I try to just decrement lineLocation in a loop, like lineLocation - i, it goes out of order and will eventually search context that is not relevant.
Is there any type of 'read next line' function I could use?
I am looking for a way to loop through the ranked result line numbers.
EDIT: I then go on to use a FOR loop, where I want it to read through all of the rows of text in the context column from the ranked results.
The problem I am having with this is that it only reads the first row of text in the 'context' column, and I need it to look at all of the rows that are returned by the ranked search.
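For reference, a FOR ... IN SELECT loop in PL/pgSQL reads every row the query returns rather than just the first; a minimal sketch using the table and query from the question (rec would be declared as RECORD, and 'dog' is just the example search word):
FOR rec IN
    SELECT t.line_number, t.context
    FROM testing t
    WHERE to_tsvector('english', t.context) @@ to_tsquery('Cats & Dogs & Kids')
    ORDER BY ts_rank_cd(to_tsvector('english', t.context),
                        to_tsquery('Cats & Dogs & Kids')) DESC
LOOP
    -- position() returns the 1-based offset of the word in the line, 0 if absent
    RAISE NOTICE 'line %: dog at position %',
        rec.line_number, position('dog' in lower(rec.context));
END LOOP;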
I ended up creating a ranking function of my own and inserting the results of that text search into another table with a serial increment column.
I filled the new table (ranked_results) with this code:
INSERT INTO ranked_results(sentence) VALUES (columnRanking());
I also had to create a function to delete/reset the rows in the new table upon insertion of more lines:
TRUNCATE table ranked_results RESTART IDENTITY;
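A minimal sketch of what that table could look like (only the sentence column is named above; the serial id column provides the increment):
CREATE TABLE ranked_results (
    id       SERIAL PRIMARY KEY, -- auto-incrementing rank position
    sentence TEXT                -- the ranked context text
);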
I have some bulk data in a text file that I need to import into a MySQL table. The table consists of two fields ...
ID (integer with auto-increment)
Name (varchar)
The text file is a large collection of names with one name per line ...
(example)
John Doe
Alex Smith
Bob Denver
I know how to import a text file via phpMyAdmin; however, as far as I understand, the data I import needs to have the same number of fields as the target table. Is there a way to import the data from my text file into one field and have the ID field auto-increment automatically?
Thank you in advance for any help.
Another method I use that does not require reordering a table's fields (assuming the auto-increment field is the first column) is as follows:
1) Open/import the text file in Excel (or a similar program).
2) Insert a column before the first column.
3) Set the first cell in this new column with a zero or some other placeholder.
4) Close the file (keeping it in its original text/tab/csv/etc. format).
5) Open the file in a text editor.
6) Delete the placeholder value you entered into the first cell.
7) Close and save the file.
Now you will have a file containing each row of your original file preceded by an empty column, which will be converted into the next relevant auto-increment value upon import via phpMyAdmin.
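Using the names from the question and saving as CSV, the resulting file would look something like this:
,John Doe
,Alex Smith
,Bob Denver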
Here is the simplest method to date:
Make sure your file does NOT have a header line with the column names. If it does, remove it.
In phpMyAdmin, as usual: go to the Import tab for your table and select your file. Select CSV as the format. Then -- and this is the important part -- in the Format-Specific Options, fill in the Column names field with the name of the column the data is for, in your case "Name".
This will import the names and auto-increment the id column. You're done!
Tested fine with phpMyAdmin 4.2.7.1.
No need to correct anything on import with LOAD DATA INFILE; just create the auto-increment column as the LAST column/field. As it parses, if your table is defined with 30 columns but the text file only has 1 (or anything fewer), it will fill the leading columns first, in direct sequence, so make sure your field delimiter is correct (for any future imports). Again, put the auto-increment column AFTER the columns you know are being imported.
create table YourMySQLTable
( FullName      varchar(30) not null,
  SomeOtherFlds varchar(20) not null,
  IDKey         int not null AUTO_INCREMENT,
  PRIMARY KEY (IDKey)
);
Notice that IDKey, the auto-increment column, is the last field of the table... regardless of your input text file, which may have fewer columns than your final table will actually hold.
Then, import the data via...
LOAD DATA
INFILE 'C:/SomePath/WhereTextFileIs/ActualFile.txt'
INTO TABLE YourMySQLTable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n';
The above example is based on a comma-separated list with quotes around each field, such as
"myfield1","anotherField","LastField". Also, the line terminator is the CR/LF that typical text files use to delimit each row.
In your sample text file, with the full name as the single column, all the data would get loaded into YourMySQLTable's FullName column. Since IDKey is at the END of the list, it will still be assigned auto-increment values from 1 upward and will not have any conflict with the columns from the inbound text.
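If you would rather keep the auto-increment column first, an alternative is to name the target columns explicitly at the end of the statement, so only those are read from the file (assuming only FullName comes from the file):
LOAD DATA
INFILE 'C:/SomePath/WhereTextFileIs/ActualFile.txt'
INTO TABLE YourMySQLTable
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\r\n'
(FullName);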
I just used a TAB as the first field in my text file, then imported it as usual. I got a warning about the ID field but the field incremented as expected...
I just tried this:
In your phpMyAdmin table, match the number of fields you have in your CSV.
Perform the import of the CSV data into your table.
Go to the [Structure] tab and add a new field [At beginning of table] (I assume you want the id field there).
Fill in the [name] attribute as "id".
Set [length] to "5".
Set [Index] to "Primary".
Tick A_I (Auto Increment).
Hit the [Go] button.
The table should update with the id field at the front of all your data, auto-incrementing.
At least this way you don't have to worry about matching fields, etc.
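For reference, the same steps can be done in SQL with a single statement (your_table is a placeholder; existing rows are numbered from 1 automatically):
ALTER TABLE your_table
ADD COLUMN id INT(5) NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;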
I've solved that problem by simply adding the column names under Format-Specific Options, without the ID column, because the ID column is auto-increment. In my case it works fine without changing anything in the CSV file. My CSV file has only data inside, no column headers.
If the table columns do not match, I usually add "bogus" fields with empty data where the real data would've been. So, if my table needs "id", "name", "surname", "address", "email" and I only have "id", "name", "surname", I change my CSV file to have "id", "name", "surname", "address", "email" but leave the fields that I do not have data for blank.
This results in a CSV file looking like this:
1,John,Doe,,
2,Jane,Doe,,
I find it simpler than the other methods.