iOS (iPhone/iPad) - exporting CSV from a SQLite3 database

Does anyone know how to export a CSV file from a SQLite3 database in an iPhone app?

There are several iOS CSV libraries that can be used to export the data from the phone. This is a trivial data transformation task: you read the information row by row from the SQLite result set and write it out to the CSV file. If you do not need to process the data after the complete read-in, you should be able to stream it out as quickly as it is read (see the sketch below).
Writing CSV files is mostly trivial and can be implemented without much effort. I'm certain there are libraries for iOS specifically, but I've used the Python CSV export routines regularly, and Excel reads their output without trouble. You just have to be careful, as Excel has the habit of reinterpreting the values once they are in there, which can make some accuracy-sensitive calculations impossible.
The .csv file format is trivial, and I've implemented it several times in several languages. If you have numeric requirements, though, you will spend a lot of time making sure that the program you feed it into accepts the numbers properly - and you will have to deal with 'I imported it into Excel, saved it, and now the data is wrong' bugs...
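As a concrete illustration of that row-by-row streaming, here is a minimal sketch using Python's sqlite3 and csv modules (the database path, table, and column names are hypothetical stand-ins); the same structure carries over to an iOS implementation built on the SQLite C API:

    # Minimal sketch: stream rows from a SQLite table straight into a CSV file.
    # "app.db" and the "measurements" table are hypothetical stand-ins.
    import csv
    import sqlite3

    conn = sqlite3.connect("app.db")
    cursor = conn.execute("SELECT id, recorded_at, value FROM measurements")

    with open("export.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow([col[0] for col in cursor.description])  # header row
        for row in cursor:           # each row is written as it is read,
            writer.writerow(row)     # so nothing large is held in memory

    conn.close()

The csv module also handles the quoting of commas, quotes, and newlines inside field values, which is where most hand-rolled CSV writers go wrong.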

Related

How to clean files easily?

I have been a BI developer on Microsoft SQL Server for a while now.
I have always worked either with almost-clean data - Excel files where the first row holds the headings and the sheets do not contain much rubbish (irrelevant data, stray calculations, and so on), or text files with comma-separated data (CSV files) -
or with relatively small numbers of files, which I cleaned manually (it wasn't an issue).
In my new job I am getting many files that are not clean; examples are plain text files (not CSV) and Excel files that are the opposite of those described above.
My problem is that these files are numerous, and going through every one is tedious (opening it, cleaning it manually, and trying to make sense of the data within) before I can finally load it into an integration tool (SSIS, Informatica) and from there, through a data warehouse, into a viz tool.
A viz tool like Tableau Desktop can't clean them appropriately with its automatic interpretation (with these unclean files it picks up only the main tables and ignores the rest).
I am sure someone has worked with these things; your help would be appreciated!
How to deal with these situations?

Special Characters and Datastage Transformer issues

Hey guys, I work as a DB consultant. I am new to the DataStage software and have been running into some issues. I need to convert .xls files to .txt files and then upload them, but I am running into issues with the special characters below. I am not sure what data type or syntax I could use to upload the data to the DB, as the job crashes every time it encounters special characters such as:
Á, ', ’, Ç, Í, `, É, etc.
What attempts have we made?
Convert(Convert('ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789 ', '', Column_Name1), '', Column_Name1)
What issues persist?
Á, ', ’, Ç, Í, `, É, and Spanish tildes
Question?
Is there not a way to bring in the data as is? I do not want the Convert statement, because it only removes special characters; accented letters and German umlauts are real letters, and I don't want to replace them with spaces or remove them altogether. How do I handle this error?
Check out the Unstructured Data stage.
The Knowledge Center provides a lot of information on how to design a job with an Excel file as the data source.
In addition to that, there are blogs and videos out there as well.
Once you understand the design/interface of the stage, it is not difficult to do.
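The underlying problem is usually the encoding rather than the individual characters: if the extract and the load both use one Unicode encoding (UTF-8) end to end, accents and umlauts survive without any Convert call. Outside DataStage, here is a sketch of the .xls-to-.txt step in Python using the xlrd library (the file names are hypothetical) that keeps those letters intact; inside DataStage, the equivalent is making sure the job's NLS map and the target columns are set to a Unicode encoding.

    # Sketch: dump an .xls sheet to a tab-delimited UTF-8 text file without
    # stripping accented characters. "input.xls"/"output.txt" are hypothetical.
    import xlrd  # xlrd reads the legacy .xls format

    book = xlrd.open_workbook("input.xls")
    sheet = book.sheet_by_index(0)

    with open("output.txt", "w", encoding="utf-8") as out:
        for i in range(sheet.nrows):
            # Cell values come back as Unicode strings or numbers; stringify all.
            values = [str(v) for v in sheet.row_values(i)]
            out.write("\t".join(values) + "\n")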

Best way to report the growth of a file using PowerShell?

I would like to email myself a report of the database sizes every week, with a comparison to the week before showing the growth in megabytes and/or %.
I have everything done besides the comparison.
Imagine this setup:
a SQL Server with 100 databases
There are plenty of ways to do a comparison; I thought about writing the sizes into XML with PowerShell and later reading them back with a second script that reports to me.
Since I am self-taught in PowerShell I might have gaps here, so I am afraid of missing an easy way.
Does anyone have a nice idea of how to compare the sizes?
I will manage the report and the calculation myself later; I just need a good way to do the comparison.
Currently I am on PowerShell 3.0, but I can upgrade to 4.0.
Don't reinvent the wheel. SQL Server already has tools to monitor database file sizes, and so does Performance Monitor. There are several third-party products available too. Ask your local DBA whether such a system is already in place.
A common practice is to query the server for database file sizes on, say, a daily basis and store them in a utility database table with a timestamp. Calculating change volumes, ratios and whatnot can then be done on the T-SQL side. (Not that it is CPU-intensive anyway.)
I would create one CSV file per database, then write out two columns, date and size:
Date,Size
27.08.2014,1024
28.08.2014,1040
29.08.2014,1080
Then you can import the CSV file, sort the rows by date, compare the last two sizes, and send the result by mail.
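The comparison step itself is tiny in any language. Here it is sketched in Python ("db_sizes.csv" is a hypothetical file name following the layout above); the same logic maps one-to-one onto PowerShell's Import-Csv, Sort-Object, and Send-MailMessage cmdlets:

    # Sketch: compare the last two recorded sizes in one database's CSV log.
    # Assumes a "db_sizes.csv" (hypothetical) with Date,Size rows as above.
    import csv
    from datetime import datetime

    with open("db_sizes.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    # The example dates are day.month.year, so parse them accordingly.
    rows.sort(key=lambda r: datetime.strptime(r["Date"], "%d.%m.%Y"))

    previous, latest = float(rows[-2]["Size"]), float(rows[-1]["Size"])
    growth_mb = latest - previous
    growth_pct = 100 * growth_mb / previous if previous else 0.0

    print(f"Growth: {growth_mb:.0f} MB ({growth_pct:.1f} %)")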

How to load data into Core Data?

Thanks for your help.
I'm attempting to add Core Data to my project and I'm stuck on where and how to add the actual data to the persistent store (I'm assuming this is the place for the raw data).
I will have more than 1,000 objects, so I don't want to use a plist approach. From my searches, there seem to be XML and CSV approaches. Is there a way I can use SQL for input?
The data will not be changed by the user, and the data file will be typed in by hand, so I won't need to update these files at runtime. At this point I am not limited to any type of file - whatever is lightest on syntax is preferred.
Thanks again for any help.
You could load your data from an XML/CSV/JSON file and create the DB on the first launch of your application (if the DB is not there, read the data and create it).
A better/faster approach might be to ship your SQLite DB within your application. You can parse the file in any format you want on the simulator, create a DB with all your entities, then take it from the application data and just add it to your app as a resource.
Although I'm sure there are lighter file types that could be used, I would include a JSON file in the app bundle and import the initial dataset from it.
Update: some folks are recommending XML. NSXMLParser is almost as fast as JSONKit (and much faster than most other parsers), but the XML syntax is heavier than JSON, so an XML file holding the initial dataset would weigh more than the same data in JSON.
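Since the asker wants something light to type by hand, one option (a sketch under that assumption, with hypothetical file names) is to maintain the data as CSV and convert it to the bundled JSON with a small build-time script:

    # Build-time helper (hypothetical file names): convert a hand-maintained
    # CSV into the JSON file bundled with the app for the first-launch import.
    import csv
    import json

    with open("seed_data.csv", newline="") as f:
        records = list(csv.DictReader(f))  # the first CSV row supplies the keys

    with open("seed_data.json", "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)

    print(f"Wrote {len(records)} records to seed_data.json")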
Since Apple considers the format of its persistent stores an implementation detail, shipping a prefabricated SQLite database is not a very good idea; the names of fields and tables may change between iOS versions, phones, or whatever hidden variable you can think of. You should, in general, not concern yourself with how this serialization of your data is formatted.
There's a brief article about importing data on Apple's developer site: Efficiently Importing Data
You should ship the initial data in whatever format you're comfortable with (XML allows you to do incremental parsing efficiently, which reduces the memory footprint) and write an import routine to run when you need to import the data.
Edit: With EliBud's comment in mind, I still consider the approach a bit "iffy"... The format of the SQLite database used by Core Data is not something you'd want to generate by yourself (it's weird, simply put, and still not something you should really rely on).
So you'd want to use a mock app running on the Simulator and use Core Data to create the database (as per EliBud's answer). But you'd still have to import the data into that mock-app! And while it might make sense to do this once on a "real" computer instead of a lot of times on a mobile device (i.e. copying a file is easy, importing data is hard), you're essentially using the Simulator as an administration tool.
But hey, if it works...

How can I create a web page that shows aggregate data from Sawtooth surveys?

I'm guessing this won't apply to 99.99% of the people who see this. I've been doing some Sawtooth survey programming at work, and I need to create a web page that shows some aggregate data from the completed surveys. I was just wondering whether anyone else has done this using the flat files that Sawtooth generates, and how you went about it. I only know very basic Perl, and the server I use does not have PHP, so I'm somewhat at a loss for solutions. Anything you've got would be helpful.
Edit: The problem with offering example files is that it's more complicated than that. It's not a single file, and the data occasionally gets moved to a different file with a different format. Those added complexities are why I'm asking this question.
Doesn't Sawtooth export to CSV format? There are many Perl parsers for CSV files. Just about every language has a CSV parser or two (or twelve), MS Excel can open them directly, and they're still plain text, so you can look at them in any text editor.
I know our version of Sawtooth at work (which is admittedly very old) exports Sawtooth data into SPSS format, which can then be exported into various spreadsheet formats including CSV, if all else fails.
If you have a flat (fixed-width field) file, you can easily parse it in Perl using regular expressions, or just by taking substrings of each line, assuming you know the widths of the fields. Your question is too general to give much better advice, sorry.
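For illustration, here is that substring approach sketched in Python, with hypothetical field names and widths (Perl's substr or unpack does the same job):

    # Sketch of fixed-width parsing; the layout below is hypothetical:
    # respondent id in columns 0-7, question code in 8-13, answer in 14-19.
    FIELDS = [("respondent_id", 0, 8), ("question", 8, 14), ("answer", 14, 20)]

    def parse_line(line):
        return {name: line[start:end].strip() for name, start, end in FIELDS}

    with open("survey_export.dat") as f:  # hypothetical export file
        records = [parse_line(line) for line in f if line.strip()]

    print(records[:3])  # quick sanity check of the first few records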
Matching the values up from a plaintext file with meta-data (variable names and labels, value labels etc.) is more complicated unless you already have the meta-data in some script-readable format. Making all of that stuff available on a web page is more complicated still. I've done it and it can be a bit of a lengthy project to roll your own. There are packages you can buy, like SDA, which will help you build a website where people can browse and download your survey data and view your codebooks.
Honestly, though, the easiest thing to do if you're posting statistical data on a website is to get the data into SPSS or SAS or another statistics-package format and post those files for download directly. Then you don't have to worry about it.