Azure Logic App "blob added or modified" trigger not working correctly when app is cloned

I have an Azure Blob Storage account with three containers, let's call them container-a, container-b, and container-c, to which data is uploaded more or less frequently as .txt files.
I then created a Logic App with the "When a blob is added or modified" trigger and connected it to container-a; it worked like a charm.
So I cloned the Logic App twice and connected the clones to container-b and container-c respectively, but in both clones the trigger fires for blobs added to container-a.
I checked all the trigger settings, but everything looks fine to me.
FYI:
I edited the question, since the problem only seemed to occur with my cloned Logic Apps using that trigger.
I still have to check whether I can reproduce the issue.

I found the error; it is definitely a bug:
When you select a new container for an existing blob storage connection, the new container is saved, but in the background (visible in the Logic App's code view under definition -> triggers -> metadata) the container entry is written incorrectly: it appears twice. This apparently results in the trigger simply using the first container (in alphanumeric order) of the connected blob storage.
Deleting the duplicate entry in code view fixed things for me in one case, but not in the other. In the end, the best thing to do is to completely recreate the blob storage connection in the trigger step of the Logic Apps Designer and then select the desired container there again.
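For illustration, the duplicated entry looks roughly like this in code view. The encoded keys below are reconstructions (base64 of the URL-encoded container path), and the trigger name is illustrative:

```json
"triggers": {
    "When_a_blob_is_added_or_modified": {
        "metadata": {
            "JTJmY29udGFpbmVyLWE=": "/container-a",
            "JTJmY29udGFpbmVyLWI=": "/container-b"
        }
    }
}
```

Deleting the stale "/container-a" entry is the manual fix described above; recreating the connection in the designer regenerates the metadata cleanly.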

Related

ADF doesn't create a BlobCreated event with a dataflow

I have a pipeline with a dataflow activity inside it that copies data to blob storage, and I have a trigger activated.
The problem is that the trigger works if I place the file on storage manually, but it doesn't fire when the dataflow puts the file on blob storage.
Here is the trigger info: [screenshot of the trigger configuration]
The problem is that a dataflow sink using the Parquet format generates a BlobRenamed event instead of BlobCreated (the file is written under a temporary name and then renamed). Therefore, the trigger never receives the event it is listening for.
I tried this and it's working fine for me; it detects blobs being added to the particular container.
It appears that the trigger is set up to react only when new files are added to the blob storage, not when existing files change. It's possible that the dataflow activity updates an existing file rather than producing a new one when it moves data to blob storage.
Agreed with @Joel Cochran: in Blob path ends with you are passing train_data.parquet, and if the trigger does not find a file whose name matches that pattern, it will not trigger the pipeline.
To fix this, you can tweak the trigger to pick up changes to both new and existing files. This can be achieved by putting only .parquet in the Blob path ends with field of the trigger setup, which makes the trigger react to any Parquet file written under the supplied path.
Specify the correct details, as in the sketch below.
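For reference, a minimal sketch of what the storage event trigger definition looks like in JSON; the names, paths, and the trimmed-down schema here are assumptions, so check them against your own factory:

```json
{
    "name": "ParquetArrivedTrigger",
    "properties": {
        "type": "BlobEventsTrigger",
        "typeProperties": {
            "blobPathBeginsWith": "/container-a/blobs/",
            "blobPathEndsWith": ".parquet",
            "ignoreEmptyBlobs": true,
            "events": [ "Microsoft.Storage.BlobCreated" ]
        }
    }
}
```

Because the trigger only fires for the events it lists, a sink that emits BlobRenamed instead of BlobCreated will never match, regardless of how the path filters are set.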
It's possible that the trigger is not configured to detect changes made by the dataflow activity. Check the trigger's settings to ensure that it is monitoring the correct blob container and that it is set up to detect the appropriate types of changes, such as new or modified blobs.
I ended up using a Web activity to send custom blob events to an Event Grid custom topic, and custom event triggers on the receiving pipeline.
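A minimal sketch of the payload such a Web activity can POST to an Event Grid custom topic; the event type and all field values here are illustrative:

```json
[
    {
        "id": "0d1f9a4e-1111-2222-3333-444455556666",
        "eventType": "Custom.Dataflow.FileReady",
        "subject": "/container-b/train_data.parquet",
        "eventTime": "2023-01-01T00:00:00Z",
        "data": { "path": "container-b/train_data.parquet" },
        "dataVersion": "1.0"
    }
]
```

The request goes to the custom topic's endpoint with the aeg-sas-key header for authentication, and the custom event trigger on the receiving pipeline filters on subject and eventType.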

Setting up a BigQuery to Google Cloud Storage pipeline with overwriting

I am trying to set up a really simple pipeline in Data Fusion which takes a table from BigQuery and stores that data in Google Cloud Storage. With the pipeline setup below it's fairly easy: we first read the BigQuery table and schema, then sink the data into a Google Cloud Storage bucket. This works, but the problem is that a new folder and a new file get created for each transfer that I run. What I would like to do is overwrite a single file in the same file path with each new transfer.
What I ran into is that in this setup, a new folder and a new file get created within Google Cloud Storage using a timestamp suffix. Looking at the sink configuration below, you can indeed see a timestamp there by default.
Alright, that would mean that if I removed the suffix, no new folder should be created. The hover-over text confirms this: "If not specified, nothing will be appended to the path".
However, when I clear this value and then save, the full time format automatically pops up again. I can't use a static value either, because that results in errors. For example, I just tried creating a folder named "12" in Google Cloud Storage and setting the suffix to that, but as you would guess this doesn't work. Is anyone else running into this problem? How do I get rid of the path suffix so I don't get a new folder for each timestamp within Google Cloud Storage?
This seems to be an issue with the Data Fusion UI. I have filed a JIRA for this: https://issues.cask.co/browse/CDAP-16129.
I understand this can be confusing when you open the configuration again. The reason it happens is that whenever you open the configuration modal, we pre-populate fields with default values from the plugin widget JSON (if no value is present).
As a workaround, can you try one of the following:
Export the pipeline: once you have configured all the properties in the plugins, you can export the pipeline. This downloads a JSON file in which you can locate the property and remove it; then import the pipeline and publish it without opening the specific plugin (see the sketch after this list).
Or, simply remove the property from the plugin configuration modal, close it, and publish the pipeline directly. The UI will re-populate the value every time you open the plugin configuration, but once you delete it and close the modal, that state is retained until you open the configuration again.
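For illustration, the sink section of the exported pipeline JSON looks roughly like this; the plugin and property names are assumptions from memory, so verify them against your own export. Deleting the suffix line before re-importing is what stops the timestamped folders:

```json
{
    "name": "GCS",
    "plugin": {
        "name": "GCS",
        "type": "batchsink",
        "properties": {
            "referenceName": "gcs_sink",
            "path": "gs://my-bucket/export",
            "suffix": "yyyy-MM-dd-HH-mm"
        }
    }
}
```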
Hope this helps.

Swift, OS X 10.11: cannot open SQLite3 database on the second app execution

I use the raw commands of the sqlite3 library. The first time you install the app, a method executes sqlite3_open(), which is supposed to create the database. It actually does create it in a user folder (the desktop folder on OS X), as shown in the log screen. After this step, it creates two tables and saves some data, and all of this completes successfully.
The second time you run the app, it attempts to open the database with the same sqlite3_open() call, but it fails with error code 14 (SQLITE_CANTOPEN), as shown in the image.
After that, I did some research and found that newer versions of SQLite use three files (.sqlite, .sqlite-wal, and .sqlite-shm). I then started searching for how to create those two additional files at the moment the first file (the .sqlite file) is created, but all the tutorials I found just copy those three previously created files into the project's references folder; they don't create them.
Continuing my search, I found that there is an option to change the SQLite configuration in my app to prevent it from using this WAL option: I had to execute the SQLITE_FCNTL_PRAGMA command (though maybe I'm not using it the way it's intended).
If you need any more info that may help solve this issue, please just let me know.
[Image: class method that opens/creates the database]
Edit: [screenshot showing the extended errcode for error 14, with no further details]
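For what it's worth, a simpler way to avoid WAL than the SQLITE_FCNTL_PRAGMA file control is to issue the pragma as plain SQL right after opening. A minimal sketch, assuming the database lives in a user-writable folder; the path and error handling are illustrative:

```swift
import Foundation
import SQLite3

var db: OpaquePointer?
// Illustrative location; the key point is that it must be user-writable.
let path = NSSearchPathForDirectoriesInDomains(.documentDirectory,
                                               .userDomainMask, true)[0] + "/app.sqlite"

if sqlite3_open(path, &db) == SQLITE_OK {
    // Switch from WAL back to the classic rollback journal, so the
    // .sqlite-wal / .sqlite-shm side files are not needed at all.
    if sqlite3_exec(db, "PRAGMA journal_mode=DELETE;", nil, nil, nil) != SQLITE_OK {
        print("pragma failed: \(String(cString: sqlite3_errmsg(db)))")
    }
} else {
    // Error 14 is SQLITE_CANTOPEN; the extended code narrows down the cause.
    print("open failed (\(sqlite3_extended_errcode(db))): \(String(cString: sqlite3_errmsg(db)))")
}
```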

Listening to the HTML5 file system in a Chrome Application

I am working on a Google Chrome App which reads from and writes to the sandboxed local file system.
I am accessing the file system by invoking window.webkitRequestFileSystem || window.requestFileSystem
This is a large application, and I have some code components creating and deleting files (call them the producers), and other code components displaying the files (the consumers).
For clean separation of code, I don't want the producers and consumers to know about one another. I would like the consumers to simply watch the file system, and react appropriately when files are created or modified.
Sadly, it appears that the framework provides no way to add a listener to the local file system.
Am I correct in saying that?
It looks like this is in the works and may land within the next few months. See the relevant issue tracker.

SQLite DB to-dos during an iPhone app update

I have some general questions about iPhone app updates that involve a SQLite DB.
1. With the new update, does the existing SQLite DB get overwritten with a copy of the new one?
2. If the update doesn't involve any schema changes, then the user should be able to reuse the existing database with their saved data, right? (Assuming the existing database doesn't get overwritten, per question 1.)
3. If there are some schema changes, what's the best way to transfer data from the old database into the new one? Can someone please give me guidelines and sample code?
Only files inside the app bundle are replaced. If the database file is in your app's Documents directory, it will not be replaced. (Note that if you change files inside your app bundle, the code signature will no longer be valid, and the app will not launch. So unless you are using a read-only database, it would have to be in the Documents directory.)
Yes.
What's best depends on the data. You're not going to find sample code for such a generic question. First, you need to detect that your app is running with an old DB version. Then you need to upgrade it.
To check versions:
You could use a different file name for the new schema. If Version2.db does not exist but Version1.db does, do an upgrade.
You could embed a schema version in your database. I have a table called metadata with a name and value column. I use that to store some general values, including a dataversion number. I check that number when I open the database, and if it is less than the current version, I do an upgrade.
Instead of creating a table, you could also use SQLite's built-in user_version pragma to check and store a version number (see the sketch after this list).
You could check the table structure directly: look for the existence of a column or table.
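A minimal sketch of the user_version approach described above; the version constant and the upgrade hook are illustrative:

```swift
import SQLite3

let latestSchemaVersion: Int32 = 2

func schemaVersion(of db: OpaquePointer?) -> Int32 {
    var stmt: OpaquePointer?
    var version: Int32 = 0
    // PRAGMA user_version returns a single row holding the stored number.
    if sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &stmt, nil) == SQLITE_OK,
       sqlite3_step(stmt) == SQLITE_ROW {
        version = sqlite3_column_int(stmt, 0)
    }
    sqlite3_finalize(stmt)
    return version
}

func upgradeIfNeeded(_ db: OpaquePointer?) {
    if schemaVersion(of: db) < latestSchemaVersion {
        // ... run the migration SQL here, then record the new version.
        sqlite3_exec(db, "PRAGMA user_version = \(latestSchemaVersion);", nil, nil, nil)
    }
}
```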
To upgrade:
You could upgrade in place by using a series of SQL commands. You could even store a SQL file inside your app bundle as a resource and simply pass it along to sqlite3_exec to do all the work. (Do this inside a transaction, in case there is a problem; see the sketch after this list.)
You could upgrade by copying data from one database file to a new one.
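A minimal sketch of the transaction-wrapped, in-place upgrade; the ALTER and CREATE statements are made-up examples:

```swift
import SQLite3

// Run the whole migration in one transaction so a failure partway
// through leaves the old schema untouched.
func upgradeInPlace(_ db: OpaquePointer?) -> Bool {
    let migration = """
    BEGIN;
    ALTER TABLE notes ADD COLUMN created_at REAL;
    CREATE INDEX IF NOT EXISTS idx_notes_created ON notes(created_at);
    COMMIT;
    """
    if sqlite3_exec(db, migration, nil, nil, nil) != SQLITE_OK {
        print("upgrade failed: \(String(cString: sqlite3_errmsg(db)))")
        // If execution stopped after BEGIN, make sure we roll back.
        sqlite3_exec(db, "ROLLBACK;", nil, nil, nil)
        return false
    }
    return true
}
```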
If your upgrade may run a long time (more than one second), you should display an upgrading screen, to explain to the user what is going on.
1) The database file isn't stored as part of the app bundle, so no, it won't get automatically overwritten.
2) Yes - all their data will be saved. In fact, the database won't get touched at all by the update.
3) This is the tricky one. Read this fantastically interesting document, especially the part on lightweight migration: if your schema changes are small and follow a certain set of rules, they will happen automatically and the user won't notice. However, if there are major changes to the schema, you will have to write your own migration code (that's covered in that link as well).
I've always managed to get away with lightweight migrations myself; it's by far easier than writing the migration code yourself.
What I do is create a working copy of the database in the Documents directory; the main copy ships with the bundle. When I update the app, I then have the option to copy the new version over the working copy or leave it as is (see the sketch below).
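A minimal sketch of that working-copy pattern, assuming the bundled database is named app.sqlite (the file names are illustrative):

```swift
import Foundation

// Copy the bundled database into Documents on first launch and
// return the writable working copy's location.
func installWorkingCopy() throws -> URL {
    let fm = FileManager.default
    let docs = try fm.url(for: .documentDirectory, in: .userDomainMask,
                          appropriateFor: nil, create: true)
    let workingCopy = docs.appendingPathComponent("app.sqlite")
    if !fm.fileExists(atPath: workingCopy.path) {
        guard let bundled = Bundle.main.url(forResource: "app", withExtension: "sqlite") else {
            throw CocoaError(.fileNoSuchFile)
        }
        try fm.copyItem(at: bundled, to: workingCopy)
    }
    return workingCopy
}
```

On an app update, you can either delete the working copy so this function reinstalls the new bundled version, or leave it in place and migrate it.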