Can FileMaker Server 13 import records from a table within the same file? - filemaker

I have a script that works correctly on a server machine when run within FileMaker Pro 13, but raises errors when run within FileMaker Server 13. Both are running under Windows. The portion that is raising the error is an Import Records script step that imports from one table within a file into another table within the same file.
The error returned is 100, "File is missing," so I'm wondering if this is something not supported when running a script within FileMaker Server. If that's the case, I'm thinking that perhaps exporting the records to a temporary file and importing from that might be a workaround, but before I start down that road, I want to check and see if I'm missing something.

The short answer is: no, server-side scripts can't import from a FileMaker file. From FileMaker's help site: http://help.filemaker.com/app/answers/detail/a_id/7035/~/import%2Fexport-script-on-filemaker-server
Importing/exporting directly to and from another FileMaker Pro file is not supported via a FileMaker Server scheduled script.
Yes, exporting to an .xlsx, .csv or .txt file in the temporary directory is a common workaround; I use it frequently. If you want to avoid a temporary file, you can also grab all of the record indices into a variable and loop through them, creating records. HyperLists come in handy for this.
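For what it's worth, a minimal sketch of the temp-file route as server-safe script steps; the layout names and the transfer.csv file name are hypothetical, and the key point is building the export path from Get ( TemporaryPath ):
# Export the source table's records to the server's temporary directory
Set Variable [ $path ; Value: Get ( TemporaryPath ) & "transfer.csv" ]
Go to Layout [ "SourceTable" ]
Show All Records
Export Records [ With dialog: Off ; "$path" ]
# Re-import the temporary file into the target table (field mapping is set inside the step)
Go to Layout [ "TargetTable" ]
Import Records [ With dialog: Off ; "$path" ; Add ]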

Related

Export a CSV file from AS400 to my PC through a CL program

I want to export a database file that is created through a query from the AS400 machine to my PC, in the form of a CSV file.
Is there a way to create that connection between the AS400 and my PC through a CL program?
An idea of what I want to do can be derived from the following code:
CLRPFM DTABASENAME
RUNQRY QRY(QRYTEST1)
CHGVAR VAR(&PATH) VALUE('C:\TESTS')
CHGVAR VAR(&PATH1) VALUE('C:\TESTS')
CHGVAR VAR(&CMD) VALUE(%TRIM(&PATH) *CAT '/DTABASENAME.CSV' !> &PATH !> &PATH1)
STRPCO PCTA(*YES)
STRPCCMD PCCMD(&CMD) PAUSE(*YES)
where I somehow get my database file, give the path on my PC where I want it to be saved, and lastly run the PC command accordingly.
Take a look at Copy From Query File (CPYFRMQRYF), which will allow you to create a database physical file from the query.
You may also want to look at Copy To Import File (CPYTOIMPF), which will copy data from a database physical file to an Integrated File System (IFS) stream file (such as a .CSV); these are the type of files you'd find on a PC.
For example:
CPYTOIMPF FROMFILE(MYLIB/MYPF) TOSTMF('/home/myuser/DTABASENAME.CSV') RCDDLM(*CRLF) DTAFMT(*DLM) STRDLM(*DBLQUOTE) STRESCCHR(*STRDLM) RMVBLANK(*TRAILING) FLDDLM(',')
However, there's no single command to transfer data to your PC. Well, technically, I suppose that's not true: if you configure an SMB or NFS file share on your PC and configure the IBM i SMB or NFS client, you could in fact CPYTOIMPF directly to that file share, or use the Copy Object (CPY) command to copy from the IFS to the network share.
If your PC has an FTP server available, you could send the data via the IBM i's FTP client. Similarly, if you have an SSH server on your PC, OpenSSH is available via PASE, and SFTP or SCP could be used. You could also email the file from the i.
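As a rough sketch of the FTP-client route, assuming an FTP server on the PC at 192.168.1.10 and a source member MYLIB/QCLSRC(FTPCMDS) holding the FTP subcommands (all of these names are placeholders):
/* Redirect the FTP client's input and output to source members, then run it */
OVRDBF FILE(INPUT) TOFILE(MYLIB/QCLSRC) MBR(FTPCMDS)
OVRDBF FILE(OUTPUT) TOFILE(MYLIB/QCLSRC) MBR(FTPLOG)
FTP RMTSYS('192.168.1.10')
DLTOVR FILE(INPUT OUTPUT)
The FTPCMDS member would contain something along the lines of:
myuser mypassword
namefmt 1
ascii
put /home/myuser/DTABASENAME.CSV /tests/DTABASENAME.CSV
quit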
Instead of trying to send the file to your PC from the i, an easier solution would be to kick off a process on the PC that runs the download. My preference would be an Access Client Solutions (ACS) data transfer.
You configure and save the transfer (as a .dtfx file), then you can kick it off with:
STRPCCMD PCCMD('java -jar C:\ACS\acsbundle.jar /plugin=download C:\testacs.dtfx')
More detailed information can be found in the Automating ACS Data Transfer document.
The ACS download component is SQL based, so you could probably remove the need to use Query/400 at all.
Assuming that you have your IFS QNTC file system mapped to your network domain, you could use the CPYTOIMPF command to copy the data directly from an IBM i DB2 file to a network directory.
This sample would result in a CSV file.
CPYTOIMPF FROMFILE(file) TOSTMF('//QNTC/servername or ip/path/filename.csv') STMFCCSID(*PCASCII) RCDDLM(*CRLF) STRDLM(*NONE)
Add the FLDDLM(';') option to produce semicolon-separated values, or omit it to use a comma as the value separator.

robocopy error with ERROR 32 (0x00000020)

I have two drives, A and B. Using a Python script I am creating some files on drive A, and I am running a PowerShell script which copies all the files from drive A to drive B at an interval of 1 second.
I am getting this error in my PowerShell:
2015/03/10 23:55:35 ERROR 32 (0x00000020) Time-Stamping Destination File \\x.x.x.x\share1\source\Dummy_100.txt
The process cannot access the file because it is being used by another process. Waiting 30 seconds...
How will I overcome this error?
This happens because the file is locked by a running process. To fix this, download Process Explorer, then use Find > Find Handle or DLL to find out which process has locked the file. Use 'taskkill' to kill that process from the command line. You will be fine.
If you want to retry and then skip such files, you can use /R:n, where n is the number of retries on failed copies; for example, /W:3 /R:5 will retry 5 times, waiting 3 seconds between attempts.
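Put together, a hedged sketch of the full command (the source path is a placeholder; the destination is taken from the error message):
robocopy C:\source \\x.x.x.x\share1\source /R:5 /W:3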
How will I overcome this error?
If backup is what you have in mind and you encounter in-use files frequently, look into the Volume Shadow Copy Service (VSS), which allows copying files despite them being ‘in use’. It's not a product, but a Windows technology used by various backup tools.
Sadly, it's not built into robocopy, but it can be used in conjunction with it. See
➝ https://superuser.com/a/602833/75914
and especially:
➝ https://github.com/candera/shadowspawn
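To give an idea, a hedged sketch of pairing shadowspawn with robocopy: it mounts a shadow copy of the source at a spare drive letter (Q: here, a placeholder), runs the copy from that snapshot, then cleans up:
shadowspawn C:\source Q: robocopy Q:\ \\x.x.x.x\share1\source /R:5 /W:3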
It could be one of many reasons.
In my case, I was running a CMD script to copy a heap of SQL Server backups and transaction logs from one server to another. I too had the same problem, because it was trying to write into a log file that was supposedly opened by another process. It was not.
I ran so many IP checks and process ID checkers that I ran out of ideas about what was hogging the log file. Event Viewer said nothing.
I found out it was not even the log file that was being locked: I was able to delete it by logging into the server as a normal user with no admin privileges!
It was the backup files themselves, locked by the SQL Server Agent. Like #Oseack said, there may have been the need to use another tool whilst the backup files were still being used or locked by the SQL Server Agent.
The way I got around it was to force ROBOCOPY to wait: /W:5 did it.

How to find out why import fails on Google Cloud SQL

On my laptop, I generate a .sql file that contains around 11 million INSERT statements into several tables.
Locally I run a MySQL database, into which I import this file. It takes a while, but it succeeds without any problems. The local MySQL version is:
mysql Ver 14.14 Distrib 5.6.16, for osx10.7 (x86_64) using EditLine wrapper
I want to import this file into a Google Cloud SQL instance. To do so, I first gzip the .sql file and upload it to a bucket in Google Cloud Storage.
Then I create a D0 pay-per-use instance (the least powerful / cheapest). I click 'Import' and enter the name of the file on cloud storage.
The import starts, but after a while (around a day) the import fails, stating: An unknown error occurred.
I tried this using a MySQL 5.5 and an experimental 5.6 instance; both fail at different inserts (I can see what the latest successful insert was).
My problem is, I cannot find out what MySQL thinks is the problem.
How can I ask the Google developer console to show me a log? I tried the Google APIs page, which has a 'Logs' tab, but it gives me "An error has occurred. Please retry later."
Maybe Google Cloud SQL has some limits on the insert statements that my local MySQL does not have?
One of the fields is a MEDIUMTEXT, which I believe can be larger than 65,536 bytes.
Any advice is appreciated.
---------- UPDATE -----------
I emailed the Cloud SQL team and they confirmed the problem was that the import timed out.
So indeed, 24 hours is the maximum time an import may take on Cloud SQL.
Solutions are: use a more powerful instance for the import (and use asynchronous replication), or split up the .sql file into multiple parts.
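For the splitting option, a hedged sketch of doing it on the laptop before gzipping and uploading (the line count is arbitrary, and this only works cleanly if every INSERT statement sits on a single line):
split -l 1000000 dump.sql dump_part_
This produces dump_part_aa, dump_part_ab, and so on, each of which can be gzipped and imported separately.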
Another approach is to use several value tuples per INSERT statement; just make sure the line does not exceed 4 MB, which is the value of max_allowed_packet on Cloud SQL. It speeds up the insert greatly.
In fact, this makes it possible to have the D0 instance import the file in a few hours, so I don't need to bump it to a more powerful one.
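To illustrate, a minimal sketch using a hypothetical items table; batching rows into one statement is where the speedup comes from, as long as each statement stays under the 4 MB max_allowed_packet:
-- one row per statement: one parse and one round trip per row
INSERT INTO items (id, name) VALUES (1, 'a');
INSERT INTO items (id, name) VALUES (2, 'b');
-- many rows per statement: far fewer round trips
INSERT INTO items (id, name) VALUES (1, 'a'), (2, 'b'), (3, 'c');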

SQL Service Broker creating objects in SQL Server Database Project in VS 2012

So I've started a SQL Server database project inside VS 2012. I have done this for other databases already but not related to Service Broker.
For testing I had already created the db, queues, etc. through a T-SQL script, including Message Types whose names were in an XML-style format, i.e.
[//blah.com/Items/RequestItem]
When I try to do something like this in the DB project, it's not allowing me to, due to the special characters.
Anyone done this? Gotten around it?
Is there a way to simply put my already created T-SQL file in the database project and have it use it?
See my comment above. I was able to import the script by right-clicking on the database project.
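For reference, a hedged T-SQL sketch of the kind of Service Broker objects such a script creates; the contract, queue, and service names below are made up, but the bracket-delimited URI-style message type name from the question is valid syntax once it is delimited:
CREATE MESSAGE TYPE [//blah.com/Items/RequestItem]
    VALIDATION = WELL_FORMED_XML;
CREATE CONTRACT [//blah.com/Items/RequestContract]
    ([//blah.com/Items/RequestItem] SENT BY INITIATOR);
CREATE QUEUE dbo.RequestItemQueue;
CREATE SERVICE [//blah.com/Items/RequestService]
    ON QUEUE dbo.RequestItemQueue ([//blah.com/Items/RequestContract]);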

Perl local libraries - Sybase

I'm going to build an extremely small Perl script for dumping a Sybase database. The problem is that Perl doesn't come with preinstalled Sybase support. I don't have root access to the server, so I can't install any packages, and I can't reach the Perl folder. The server is not configured for internet access, so I have to deliver the packages "manually" through FTP.
So, my question is whether there are any easy ways of doing this. The only library I need is DBD::Sybase or the standalone Sybase module (maybe I haven't done my research enough and don't even need this much?), which means I would love to just be able to put the .pm file there, load it through
use localModule
and then run my small script.
The solution has to work on both Red Hat and Solaris, if I understood my supervisor correctly.
Best regards
Since you are primarily concerned with dumping the database, and not data retrieval and manipulation, you could probably get by without having to use DBD::Sybase or any other Perl module that is not preinstalled.
Without more details, it's hard to be very specific, but here's the overview: your Perl script can execute SQL scripts (via the isql command-line client) which dump the databases.
You can either put the list of databases you wish to dump in a config file (or an env file), or you can generate it dynamically by calling isql with the -b option to suppress headers and "set nocount on" to suppress footers, and store the output in an array.
Once you have the list of databases, just loop over them, running another isql command to dump each database, as in the sketch below.
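A minimal sketch of that approach, using only core Perl plus the isql client; the login, server name, and dump directory are placeholders and would need to match your environment:
#!/usr/bin/perl
# Hypothetical sketch: list the databases with isql, then dump each one.
use strict;
use warnings;

my $isql = "isql -Usa -Psecret -SMYSERVER";    # placeholder login and server

# -b suppresses column headers; "set nocount on" suppresses the row-count footer
my @dbs = `$isql -b <<'SQL'
set nocount on
select name from master..sysdatabases
go
SQL
`;

for my $db (@dbs) {
    $db =~ s/^\s+|\s+$//g;                     # trim whitespace around each name
    next unless length $db;
    next if $db =~ /^(master|model|tempdb|sybsystemprocs|sybsystemdb)$/;

    # Dump this database; '/backups' is a placeholder location
    system(qq{$isql <<'SQL'
dump database $db to '/backups/$db.dmp'
go
SQL
});
}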