I could have posted this to a SQL forum, but I'm really looking for an idea or best practice, which is why I have chosen this forum.
I have an integer column in SQL Server called Payroll Number, and it is unique to each employee. We will be pulling employee information from this system via SQL views and feeding it into another system, but we don't want the payroll numbers to appear there as they do in this system. Therefore, we need to transform those payroll numbers in SQL so that the views serve obfuscated but still user-friendly numbers.
I spent quite a lot of time reading about encryption techniques in SQL, but they use complex algorithms to hash data and produce binary output. What I am after is less complex: obfuscating a number rather than hashing it.
For instance, a payroll number is 6 digits long (e.g. 145674); I want to be able to generate a 9-10 digit integer from it and use that on the other systems.
I had a look at XOR'ing but I need something more robust and elegant.
How do you handle this sort of thing? Do you write your own simple algorithm to obfuscate your integers? I need to do this at the SQL level; what do you suggest?
Thanks for your help
Regards
It is not hard to hash a value, but it is hard to hash a value, guarantee uniqueness, and have the result be a number. However, I do have a cross-database solution.
Make a new table with two columns: id (auto-generated from a random starting point) and payroll id.
Every time you need to expose a user externally, insert them into this table. This gives you a local unique id you can use (internally and externally), but it is not the payroll id.
In fact, if you already have an internal id (e.g. the user id from the user table), just use that. There is no advantage to hashing this value if it is never decoded. However, you can use the auto-generated id as your random unique "hash" -- it has all the properties you need.
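A minimal sketch of that mapping table in SQL Server syntax (table and column names here are placeholders, dbo.Employees is a hypothetical source table, and the IDENTITY seed is an arbitrary large starting point so external ids don't line up with payroll numbers):

CREATE TABLE dbo.EmployeeExternalId (
    external_id    INT IDENTITY(100000000, 1) PRIMARY KEY,  -- surrogate id exposed externally
    payroll_number INT NOT NULL UNIQUE                      -- real payroll number, never exposed
);

-- Register an employee the first time they need to be exposed externally
INSERT INTO dbo.EmployeeExternalId (payroll_number) VALUES (145674);

-- The view serves external_id instead of the real payroll number
CREATE VIEW dbo.v_Employees AS
SELECT x.external_id, e.first_name, e.last_name
FROM dbo.EmployeeExternalId AS x
JOIN dbo.Employees AS e ON e.payroll_number = x.payroll_number;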
I know there are many questions on salting and hashing passwords, but I have yet to find a tutorial that walks me through this in Visual Studio using the MVC pattern.
I currently have a DB created with a user table containing three columns:
userID(PK, int, not null)
password(varchar(45), not null)
loginID(varchar(8), null)
The password is currently saved as a plain, visible string in the DB. After researching the issue, I assume the password is easiest to store as binary instead of varchar. Does anyone know of a good tutorial for implementing hashing and salting in my program? One that clearly frames this in terms of the MVC pattern is preferred.
MVC doesn't have anything to do with salting your passwords, although someone may be able to point you to the proper libraries for your tech stack.
Salting involves generating a random sequence, appending it to the user's password, and then hashing the combined data.
The reason this is done is that the hash of a well-known string is easy to look up. A person could, for example, run a well-known hash algorithm over a whole dictionary and compare the results against stored password hashes to determine what each was hashed from. While a good hash function is a one-way function (i.e. you can't find the input based on the output), with such a precomputed mapping you could easily reverse well-known strings and string combinations.
For example, the password password has a well-known hash. When you attach a random sequence to the end (or start) and then hash that, the result is a far less common hash, and therefore significantly harder to reverse.
Sorry for not tying this to the specific technologies involved, but I wanted to communicate the general higher-level concept, since over-focusing on the technologies loses the bigger picture.
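That said, here is a rough sketch of the salt-then-hash idea in SQL Server terms, purely for illustration; in practice you would use a dedicated password-hashing library (bcrypt, PBKDF2, Argon2) in the application layer rather than plain SHA-256 in the database:

DECLARE @password NVARCHAR(45)  = N'password';
DECLARE @salt     VARBINARY(16) = CRYPT_GEN_RANDOM(16);  -- random per-user salt

-- Store both the salt and the hash of salt + password
DECLARE @hash VARBINARY(32) =
    HASHBYTES('SHA2_256', @salt + CAST(@password AS VARBINARY(90)));

SELECT @salt AS salt, @hash AS password_hash;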
Our current PostgreSQL database is using GUIDs as primary keys and storing them as a text field.
My initial reaction to this is that performing even a minimal cartesian join would be an indexing nightmare when trying to find all the matching records. However, perhaps my limited understanding of database indexing is wrong here.
I'm thinking that we should be using the uuid type instead, since it stores a binary representation of the GUID whereas text does not, and the indexing benefit you get on a text column is minimal.
It would be a significant project to change these, and I'm wondering if it would be worth it?
When dealing with UUID numbers, store them as data type uuid. Always. There is simply no good reason to even consider text as an alternative. Input and output is done via the text representation by default anyway. The cast is very cheap.
The data type text requires more space in RAM and on disk, is slower to process, and is more error prone. @khampson's answer provides most of the rationale. Oddly, he doesn't seem to arrive at the same conclusion.
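For what it's worth, converting an existing text column that already holds GUID strings to the native type is a one-liner (table and column names below are placeholders):

ALTER TABLE my_table
    ALTER COLUMN id TYPE uuid USING id::uuid;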
This has all been asked and answered and discussed before. Related questions on dba.SE with detailed explanation:
Would index lookup be noticeably faster with char vs varchar when all values are 36 chars
What is the optimal data type for an MD5 field?
bigint?
Maybe you don't need UUIDs (GUIDs) at all. Consider bigint instead. It only occupies 8 bytes and is faster in every respect. Its range is often underestimated:
-9223372036854775808 to +9223372036854775807
That's 9.2 million million million positive numbers. IOW, nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six something billion.
If you burn 1 million IDs per second (which is an insanely high number) you can keep doing so for 292471 years. And then another 292471 years for negative numbers. "Tens or hundreds of millions" is not even close.
UUID is really just for distributed systems and other special cases.
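If a plain bigint is enough for your case, a minimal sketch (hypothetical table name):

CREATE TABLE my_table (
    id   bigserial PRIMARY KEY,  -- 8-byte auto-incrementing integer
    data text
);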
As @Kevin mentioned, the only way to know for sure with your exact data would be to compare and contrast both methods, but from what you've described, I don't see why this would be different from any other case where a string is either the primary key in a table or part of a unique index.
What can be said up front is that your indexes will probably be larger, since they have to store larger string values, and in theory the comparisons for the index will take a bit longer, but I wouldn't advocate premature optimization if doing so would be painful.
In my experience, I have seen very good performance on a unique index using md5sums on a table with billions of rows. I have found that it tends to be other factors about a query that result in performance issues. For example, when you end up needing to query over a very large swath of the table, say hundreds of thousands of rows, a sequential scan ends up being the better choice, so that's what the query planner chooses, and it can take much longer.
There are other mitigating strategies for that type of situation, such as chunking the query and then UNIONing the results (e.g. a manual simulation of the sort of thing that would be done in Hive or Impala in the Hadoop sphere).
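Purely for illustration (the concrete form isn't given above): the idea is to split the key range into slices, run each slice as its own narrower query (possibly in parallel), and combine the pieces, e.g. with UNION ALL. Table, column, and boundary values below are placeholders:

SELECT md5_key, payload FROM big_table WHERE md5_key >= '0' AND md5_key < '8'
UNION ALL
SELECT md5_key, payload FROM big_table WHERE md5_key >= '8';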
Re: your concern about indexing of text, while I'm sure there are some cases where a dataset produces a key distribution such that it performs terribly, GUIDs, much like md5sums, sha1's, etc. should index quite well in general and not require sequential scans (unless, as I mentioned above, you query a huge swath of the table).
One of the big factors about how an index would perform is how many unique values there are. For that reason, a boolean index on a table with a large number of rows isn't likely to help, since it basically is going to end up having a huge number of row collisions for any of the values (true, false, and potentially NULL) in the index. A GUID index, on the other hand, is likely to have a huge number of values with no collision (in theory definitionally, since they are GUIDs).
Edit in response to comment from OP:
So are you saying that a UUID guid is the same thing as a Text guid as far as the indexing goes? Our entire table structure is using Text fields with a guid-like string, but I'm not sure Postgre recognizes it as a Guid. Just a string that happens to be unique.
Not literally the same, no. However, I am saying that they should have very similar performance for this particular case, and I don't see why optimizing up front is worth doing, especially given that you say to do so would be a very involved task.
You can always change things later if, in your specific environment, you run into performance problems. However, as I mentioned earlier, I think if you hit that scenario, there are other things that would likely yield better performance than changing the PK data types.
A UUID is a 128-bit data type (so, 16 bytes), whereas text has 1 or 4 bytes of overhead plus the actual length of the string. For a GUID, that would mean a minimum of 33 bytes, but could vary significantly depending on the encoding used.
So, with that in mind, indexes on text-based UUIDs will certainly be larger since the values are larger, and comparing two strings is in theory less efficient than comparing two numeric values, but it's not something that's likely to make a huge difference in this case, at least not in usual cases.
I would not optimize up front when doing so would be a significant cost and is likely never to be needed. That bridge can be crossed if that time does come (although I would pursue other query optimizations first, as I mentioned above).
Regarding whether Postgres knows the string is a GUID, it definitely does not by default. As far as it's concerned, it's just a unique string. But that should be fine for most cases, e.g. matching rows and such. If you find yourself needing some behavior that specifically requires a GUID (for example, some non-equality based comparisons where a GUID comparison may differ from a purely lexical one), then you can always cast the string to a UUID, and Postgres will treat the value as such during that query.
e.g. for a text column foo, you can do foo::uuid to cast it to a uuid.
There's also a module available for generating uuids, uuid-ossp.
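For example (table and column names are placeholders):

-- Cast a text column holding GUID strings to uuid at query time
SELECT id_text::uuid
FROM   my_table
WHERE  id_text::uuid = 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'::uuid;

-- Generate new UUIDs with the uuid-ossp extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
SELECT uuid_generate_v4();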
I have a large geospatial data set (~30m records) which I am currently importing into a PostgreSQL database. I need a unique ID to assign to each record, but an incrementing integer might be a bad idea because it could not be reliably recreated if I ever needed to reimport the data set.
It seems that a unique hash of the geometry data in a fixed projection might be the best option for a reliable identifier. Being able to calculate the hash within Postgres would be beneficial, and speed would also be of benefit.
What is/are my options given this situation? Is there a particular method that is highly suitable for this situation?
If you need a unique identifier that depends on (and can be recreated from) the data, the most straightforward option seems to be an MD5 hash, which is included in PostgreSQL (no need for additional libraries) and is quite efficient and, for this scenario, secure.
The pgcrypto module provides additional hashing algorithms, e.g. SHA-1.
Of course, you need to assert that the data to be hashed is unique.
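A rough sketch of the idea, assuming PostGIS and placeholder table/column names (my_geodata, geom), with SRID 4326 standing in for whatever fixed projection you choose:

-- MD5 over a canonical text form of the geometry in a fixed projection
SELECT md5(ST_AsEWKT(ST_Transform(geom, 4326))) AS geom_hash
FROM   my_geodata;

-- Or, with pgcrypto, a SHA-1 digest instead
CREATE EXTENSION IF NOT EXISTS pgcrypto;
SELECT encode(digest(ST_AsEWKT(ST_Transform(geom, 4326)), 'sha1'), 'hex') AS geom_hash
FROM   my_geodata;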
I recently had to propose a set of new Postgres tables to our DB team that will be used by an application I am writing. They failed the design because my table had fields that were listed like so:
my_table
my_table_id : PRIMARY KEY, AUTO INCREMENT INT
some_other_table_id : FOREIGN KEY INT
some_text : CHARACTER VARYING(100)
some_flag : BOOLEAN
They said that the table would not be optimal because some_text appears before some_flag: since CHARACTER VARYING fields are slower to search than BOOLEANs, when doing a table scan it is faster to have a table structure whose columns are sequenced from greatest precision to least precision; so, like this:
my_table
my_table_id : PRIMARY KEY, AUTO INCREMENT INT
some_other_table_id : FOREIGN KEY INT
some_flag : BOOLEAN
some_text : CHARACTER VARYING(100)
These DBAs come from a Sybase background and have only recently switched over as our Postgres DBAs. I am thinking that this is perhaps a Sybase optimization that doesn't apply to Postgres (I would think Postgres is smart enough to somehow not care about column sequence).
Either way, I can't find any Postgres documentation that confirms or denies this. I'm looking for any battle-worn Postgres DBAs to weigh in on whether this is a valid or bogus (or conditionally valid!) claim.
Speaking from my experience with Oracle on similar issues, where there was a big change in behaviour between versions 9 and 10 (or 8 and 9) if memory serves (due to CPU overhead in finding column data within a row), I don't believe you should rely on documented behaviour for an issue like this when a practical experiment would be fairly straightforward and conclusive.
So I'd suggest that you create a test case for this. Create two tables with exactly the same data and the columns in a different order, and run repeated and varied tests. Try to implement the entire test as a single script that can be run on a development or test system and which tells you the answer. Maybe the DBAs are right, and you can say, "Hey, confirmed your thoughts on this, thanks a lot", or you might find no measurable, significant difference. In the latter case you can hand the entire test to the DBAs and explain how you can't reproduce the problem. Let them run the tests.
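A rough sketch of such a test in PostgreSQL (table names, row counts, and the sample query are arbitrary choices):

CREATE TABLE t_text_first (
    id        serial PRIMARY KEY,
    other_id  int,
    some_text varchar(100),
    some_flag boolean
);

CREATE TABLE t_flag_first (
    id        serial PRIMARY KEY,
    other_id  int,
    some_flag boolean,
    some_text varchar(100)
);

-- Load identical data into both tables
INSERT INTO t_text_first (other_id, some_text, some_flag)
SELECT g, md5(g::text), (g % 2 = 0) FROM generate_series(1, 1000000) AS g;

INSERT INTO t_flag_first (other_id, some_flag, some_text)
SELECT g, (g % 2 = 0), md5(g::text) FROM generate_series(1, 1000000) AS g;

-- Time the same scan against each layout
EXPLAIN ANALYZE SELECT count(*) FROM t_text_first WHERE some_flag;
EXPLAIN ANALYZE SELECT count(*) FROM t_flag_first WHERE some_flag;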
Either way, someone is going to learn something, and you've got a test case you can apply to future (or past) versions.
Lastly, post here on what you found ;)
What your DBAs are probably referring to is the access strategy for "getting to" the boolean value in a given tuple (/row).
In THEIR proposed design, a system can "get to" that value by looking at byte 9.
In YOUR proposed design, the system must first inspect the LENGTH field of all varying-length columns [that come before your boolean column], before it can know the byte offset where the boolean value can be found. That is ALWAYS going to be slower than "their" way.
Their consideration is one of PHYSICAL design (and it is a correct one). Damir's answer is also correct, but it is an answer from the perspective of LOGICAL design.
If the remark by your DBAs is really intended as criticism of a 'bad' design, then it deserves to be pointed out to them that LOGICAL design is your job (and column order doesn't matter at that level), and PHYSICAL design is their job. And if they expect you to do the PHYSICAL design (their job) as well, then there is no longer any reason for the boss to keep them employed.
From a database design point of view, there is no difference between your design and what your DBAs suggest -- your application should not care. In relational databases there is (logically) no such thing as an order of columns; in fact, if the order of columns matters (logically), the design fails 1NF.
So, simply pass all your create table scripts to your DBAs and let them implement (reorder columns) in whatever way they feel is optimal at the physical level. You simply continue with the application.
Database design can not fail on order of columns -- it is simply not part of the design process.
Future users of large data banks must be protected from having to know how the data is organized in the machine ... the problems treated here are those of data independence -- the independence of application programs and terminal activities from growth in data types and changes ...
E.F. Codd ~ 1970
Changes to the physical level ... must not require a change to an application ...
Rule 8: Physical data independence (E.F. Codd ~ 1985)
So here we are -- more than 40 years later ...
In a recent CODE Magazine article, John Petersen shows how to use bitwise operators in TSQL in order to store a list of attributes in one column of a db table.
Article here.
In his example he's using one integer column to hold how a customer wants to be contacted (email,phone,fax,mail). The query for pulling out customers that want to be contacted by email would look like this:
SELECT C.*
FROM dbo.Customers C
,(SELECT 1 AS donotcontact
,2 AS email
,4 AS phone
,8 AS fax
,16 AS mail) AS contacttypes
WHERE ( C.contactmethods & contacttypes.email <> 0 )
AND ( C.contactmethods & contacttypes.donotcontact = 0 )
Afterwards he shows how to encapsulate this into a table-valued function.
My questions are these:
1. Is this a good idea? Any drawbacks? What problems might I run into using this approach of storing attributes versus storing them in two extra tables (Customer_ContactType, ContactType) and doing a join with the Customer table? I guess one problem might be if my attribute list gets too long: if the column is an integer then my attribute list can hold at most 32 flags.
2. What is the performance of doing these bitwise operations in queries as you move into the tens of thousands of records? I'm guessing that it would not be any more expensive than any other comparison operation.
If you wish to filter your query based on the value of any of those bit values, then yes this is a very bad idea, and is likely to cause performance problems.
Besides, there simply isn't any need - just use the bit data type.
The reason why using bitwise operators in this way is a bad idea is that SQL Server maintains statistics on various columns in order to improve query performance - for example, if you have an email column, SQL Server can tell you roughly what percentage of values in that email column are true and select an appropriate execution plan based on that knowledge.
If, however, you have a flags column, SQL Server will have absolutely no idea how many records in a table match flags & 2 (email) - it doesn't maintain these sorts of statistics. Without this sort of information available to it, SQL Server is far more likely to choose a poor execution plan.
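For instance, a sketch of the bit-column alternative in T-SQL (column names are illustrative, not from the article):

ALTER TABLE dbo.Customers ADD
    do_not_contact bit NOT NULL DEFAULT 0,
    contact_email  bit NOT NULL DEFAULT 0,
    contact_phone  bit NOT NULL DEFAULT 0,
    contact_fax    bit NOT NULL DEFAULT 0,
    contact_mail   bit NOT NULL DEFAULT 0;

-- The filter is now plain column comparisons that statistics (and indexes) can help with
SELECT C.*
FROM   dbo.Customers AS C
WHERE  C.contact_email = 1
  AND  C.do_not_contact = 0;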
And don't forget the maintenance problems using this technique would cause. As it is not standard, all new devs will probably be confused by the code and not know how to adjust it properly. Errors will abound and be hard to find. It is also hard to do reporting type queries from. This sort of trick stuff is almost never a good idea from a maintenance perspective. It might look cool and elegant, but all it really is - is clunky and hard to work with over time.
One major performance implication is that there will not be a lookup operator for indexes that works in this way. If you said WHERE contact_email=1 there might be an index on that column and the query would use it; if you said WHERE (contact_flags & 1)=1 then it wouldn't.
One column stores one piece of information only - it's the database way.
(Didn't see it - Kragen's answer also states this point, way before mine.)
Taking your questions in reverse order: the best way to know what your performance is going to be is to profile.
This is, most definitely, an "It Depends" question. I personally would never store such things as integers. For one thing, as you mention, there's the conversion factor. For another, at some point you or some other DBA, or someone, is going to have to type:
Select CustomerName, CustomerAddress, ContactMethods, [etc]
From Customer
Where CustomerId = xxxxx
because some data has become corrupt, or because someone entered the wrong data, or something. Having to do a join and/or a function call just to get at that basic information is way more trouble than it's worth, IMO.
Others, however, will probably point to the diversity of your options, or the ability to store multiple value types (email, vs phone, vs fax, whatever) all in the same column, or some other advantage to this approach. So you would really need to look at the problem you're attempting to solve and determine which approach is the best fit.