I am currently creating a form in Access; however, when I go into Form view I am unable to update any of the current records or add new records. Would anyone have any idea why I can't edit or update my records, or how I go about fixing this?
My form is currently linked to a query, and in case it matters, I have also included a search function in the form.
Any help/advice would be greatly appreciated.
Thanks
Paula
It means that the query you use is not updateable. There are a lot of restrictions on the design of updateable queries; for instance, you cannot use aggregation in the query, joins should be on unique keys, etc. Try to redesign your query. As a workaround, you can copy the data from your query to a temporary table, edit the data in that table, and then copy it back to the main table(s).
Your query is probably not updateable. You can check this by simply opening the query directly, and trying to edit/add data.
The most common reason is a JOIN on non-indexed columns.
For more reasons see: Dealing with Non-Updateable Microsoft Access Queries
or Allen Browne: Why is my query read-only?
I'm trying to write a Spark dataframe into a PostgreSQL table by using df.write.jdbc.
The problem is, I want to make sure not to lose the existing data already inside the table (using SaveMode.Append), but also to avoid inserting duplicate data that is already in it.
So, if I use SaveMode.Overwrite:
- The table gets dropped, losing all previous data.
If I use SaveMode.Append:
- The table doesn't get dropped, but the duplicate records get inserted.
- If I use this mode together with a primary key already in the db (which would provide the unique constraint), it returns an error.
Is there some kind of option to solve this?
Thanks
What I did was to filter out existing records; that means an additional read to get the existing ids, and a filter operation on the data to append... but it does the job for me.
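A minimal sketch of that approach in PySpark, assuming the duplicates are identified by an "id" column; the table name, JDBC URL, and credentials below are invented for illustration:

# Sketch of the "read existing ids, then filter" approach described above.
# Table name, column name, URL, and credentials are all assumptions.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("append-without-duplicates").getOrCreate()

url = "jdbc:postgresql://localhost:5432/mydb"          # assumed connection URL
props = {"user": "myuser", "password": "secret",       # assumed credentials
         "driver": "org.postgresql.Driver"}

# The dataframe you want to append (stand-in data for the example).
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "value"])

# Extra read: fetch only the ids that already exist in the target table.
existing_ids = (spark.read
                     .jdbc(url=url, table="target_table", properties=props)
                     .select("id"))

# "Filter out existing records": keep rows whose id is not in the table yet.
new_rows = df.join(existing_ids, on="id", how="left_anti")

# Appending now cannot insert duplicates keyed on "id".
new_rows.write.jdbc(url=url, table="target_table", mode="append", properties=props)

The left anti join keeps only the rows whose id is not already present, so the subsequent append cannot create duplicates for that key.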
I think there's a more complex solution in this post:
https://medium.com/#thomaspt748/how-to-upsert-data-into-a-relational-database-using-apache-spark-part-1-python-version-b43b9761bbf2
Maybe late, but I just ran into this.
Not able to load multiple tables, getting error:
Exception in component tMysqlInput_1 (MYSQL_DynamicLoading)
java.sql.SQLException: Bad format for Timestamp 'GUINESS' in column 3
One table works fine. Basically, after the first iteration the second table tries to use the schema of the first table. Please help with how to edit the component to make this work correctly. I am trying to load the actor and country tables from the sakila MySQL DB to another DB on the same server. The image above shows the successful single-table dynamic load.
You should not use tMysqlInput if the output schemas differ. In this case there is no way around tJavaRow and custom code. However, I cannot guess what happens in tMap, so you should provide some more details about what you want to achieve.
If all you need is to load data from one table to another without any transformations, you can do one of the following:
If your tables reside in 2 different databases on the same server, you can use a tMysqlRow and execute a query "INSERT INTO catalog.table SELECT * from catalog2.table2..". You can do some simple transformations in SQL if needed.
If your tables are on different servers, check the generic solution I suggested for a similar question here. It may need some tweaking depending on your use case, but the general idea is to replicate the functionality of INSERT INTO SELECT when the tables are not on the same server.
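Purely as an illustration of that last idea (not a Talend job): replicating INSERT INTO SELECT across two servers means reading from the source over one connection and inserting into the target over another. A hedged Python sketch, where hosts, credentials, and the target database are assumptions and only the sakila actor table comes from the question:

# Hypothetical sketch of "INSERT INTO ... SELECT" across two MySQL servers:
# stream rows from the source table and bulk-insert them into the target.
import pymysql

src = pymysql.connect(host="server-a", user="user", password="pass", database="sakila")
dst = pymysql.connect(host="server-b", user="user", password="pass", database="sakila_copy")

with src.cursor() as read_cur, dst.cursor() as write_cur:
    read_cur.execute("SELECT actor_id, first_name, last_name, last_update FROM actor")
    while True:
        rows = read_cur.fetchmany(1000)          # copy in batches
        if not rows:
            break
        write_cur.executemany(
            "INSERT INTO actor (actor_id, first_name, last_name, last_update) "
            "VALUES (%s, %s, %s, %s)",
            rows,
        )
    dst.commit()

src.close()
dst.close()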
I want to create an MS Access query object through Perl. I can query tables, but I am not able to find how to access already existing query objects in the Access database.
I also want to create new query objects through Perl and save them with a name.
Any help is much appreciated.
The below link may be helpful for you:
Class::DBI::MSAccess
I was wondering whether it is possible to query tables by specifying their object_id instead of table names in SELECT statements.
The reason for this is that some tables are created dynamically, and their structure (and names) are not known beforehand, yet I would like to be able to write sprocs that are capable of querying these tables and working on their content.
I know I can create dynamic statements and execute them, but maybe there are better ways, and I would be grateful if someone could share how to approach this.
Thanks.
You have to query sys.columns and build a dynamic query based on that.
There are no better ways: SQL isn't designed for ad-hoc or unknown structures.
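For what it's worth, a sketch of that look-up-and-execute pattern, written here in Python with pyodbc purely for illustration (inside a T-SQL stored procedure you would build the same string and run it with sp_executesql); the connection string and the object_id value are made up:

# Sketch: resolve a table by object_id, list its columns from sys.columns,
# then build and run a SELECT dynamically. Connection string is an assumption.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=myserver;"
    "DATABASE=mydb;Trusted_Connection=yes"
)
cur = conn.cursor()

object_id = 1234567  # the dynamically created table's object_id (made up)

# Table name and column list come from the catalog views.
cur.execute("SELECT OBJECT_SCHEMA_NAME(?), OBJECT_NAME(?)", object_id, object_id)
schema_name, table_name = cur.fetchone()

cur.execute("SELECT name FROM sys.columns WHERE object_id = ? ORDER BY column_id",
            object_id)
columns = [row.name for row in cur.fetchall()]

# Build the dynamic statement from catalog metadata, then execute it.
sql = f"SELECT {', '.join('[' + c + ']' for c in columns)} FROM [{schema_name}].[{table_name}]"
for row in cur.execute(sql):
    print(row)

Since the identifiers come from the catalog views rather than user input, concatenating them is reasonably safe, but bracket-quoting them as above is still worthwhile.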
In 20 years I've never worked on an application where I didn't know what my data looks like. Either your data is persisted, or it should be in XML or JSON or similar if it's transient.
I'm a newbie to pgsql. I have a few questions about it:
1) I know it is possible to access columns with <schema>.<table_name>, but when I try to access columns like <db_name>.<schema>.<table_name> it throws an error like
Cross-database references are not implemented
How can I make this work?
2) We have 10+ tables, and 6 of them have 2000+ rows. Is it fine to maintain all of them in one database, or should I create separate databases for them?
3) Regarding the tables mentioned above that have over 2000+ rows: for a particular process I need only a few rows of data, so I have created views to get those rows.
For example: a table contains details of employees, who divide into 3 types: manager, architect, and engineer. Obviously, not each and every process needs the whole table; a process just reads some of its data.
I think there are two ways to get the data: SELECT * FROM emp WHERE type='manager', or I can create views for manager, architect, and engineer and get the data with SELECT * FROM view_manager.
Can you suggest any better way to do this?
4) Do views also require storage space, like tables do?
Thanks in advance.
Cross-database queries have existed in PostgreSQL for years now. You must prefix the table name with the database name (and, of course, have the right to query it). You'll end up with something like this:
SELECT alias_1.col1, alias_2.col3 FROM table_1 as alias_1, database_b.table_2 as alias_2 WHERE ...
If your database is on another instance, then you'll need to use the dblink contrib.
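For example, with the dblink extension installed in the current database (CREATE EXTENSION dblink;), a query against another database looks roughly like this; the database, table, and column names below are invented, and the Python wrapper with psycopg2 is just one way to run it:

# Sketch: query a table in another database via the dblink contrib module.
# Database, table, and column names here are made up for illustration.
import psycopg2

conn = psycopg2.connect("dbname=database_a user=postgres")
cur = conn.cursor()

# dblink must already be installed in database_a: CREATE EXTENSION dblink;
cur.execute("""
    SELECT t.id, t.name
    FROM dblink('dbname=database_b',
                'SELECT id, name FROM table_2') AS t(id integer, name text)
""")
for row in cur.fetchall():
    print(row)

cur.close()
conn.close()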
This question does not make sense. Please refine it.
Generally, views are used to simplify the writing of other queries that reuse them. In your case, as you describe it, maybe a stored procedure would better fit your needs.
No, except for the view definition.
1: A workaround is to open a connection to the other database, and (if using psql(1)) set that as your current connection. However, this will work only if you don't try to join tables in both databases.
1) That means it's not a feature Postgres supports. I do not know any way to create a query that runs on more than one database.
2) That's fine for one database. A single database can contain billions of rows.
3) Don't bother creating views, the queries are simple enough anyway.
4) Views don't require space in the database except their query definition.