GApplication OptionArgFunc callback with user data, as part of main option group? - gtk3

Writing a Gtk (3.0, for the moment) application's main() function and installing the standard GLib/Gio GApplication scaffolding, I've hit a snag when it comes to command-line argument processing.
I'm already convinced I don't want to handle the command line myself, because that's exactly what the toolkits are supposed to be there for. At least, in theory. So the command line is set up by defining a GOptionEntry array and passing it to g_application_add_main_option_entries() in the usual fashion:
GOptionEntry options[] =
{
  { "verbose", 'v', 0,
    G_OPTION_ARG_NONE, &opt_verbose,
    "Be verbose", NULL },
  { "otherarg", 'o', 0,
    G_OPTION_ARG_STRING, &opt_other,
    "Secret", NULL },
  { NULL }
};

app = gtk_application_new("org.me.myapp", G_APPLICATION_NON_UNIQUE);
g_application_add_main_option_entries(G_APPLICATION(app), options);
So far, so good. But now I want to include a --debug flag, and I'd like to handle that with an OptionArgFunc callback so that it can modify the G_MESSAGES_DEBUG environment variable to append the G_LOG_DOMAIN when debugging is enabled. (I'd rather not write my own log handling, either.)
That part's also easy enough, using an argument-processing callback:
gboolean
enable_debug (const gchar* option_name, const gchar* value,
              gpointer data, GError** error)
{
  // ...
}

GOptionEntry options[] =
{
  { "verbose", 'v', 0,
    G_OPTION_ARG_NONE, &opt_verbose,
    "Be verbose", NULL },
  { "otherarg", 'o', 0,
    G_OPTION_ARG_STRING, &opt_other,
    "Secret", NULL },
  { "debug", 'd', G_OPTION_FLAG_NO_ARG,
    G_OPTION_ARG_CALLBACK, &enable_debug,
    "Enable debug output", NULL },
  { NULL }
};
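For what it's worth, the body of that callback would do something roughly like this (a minimal sketch; the exact G_MESSAGES_DEBUG merge logic is an assumption):

static gboolean
enable_debug (const gchar* option_name, const gchar* value,
              gpointer data, GError** error)
{
  const gchar *current = g_getenv ("G_MESSAGES_DEBUG");
  gchar *updated = (current != NULL)
      ? g_strconcat (current, " ", G_LOG_DOMAIN, NULL)
      : g_strdup (G_LOG_DOMAIN);

  /* Append our log domain so GLib's default handler prints our debug messages. */
  g_setenv ("G_MESSAGES_DEBUG", updated, TRUE);
  g_free (updated);
  return TRUE;
}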
But, now say I want to pass some user data to enable_debug() — actually have that gpointer data set to something other than NULL.
According to the docs, data carries...
User data added to the GOptionGroup containing the option when it was created with g_option_group_new()
But I'm not using g_option_group_new().
I'd be happy to use g_option_group_new(), but AFAICT there's no way to create the main GApplication option group "by hand" by calling g_option_group_new(); it's only created implicitly by calling g_application_add_main_option_entries(), which doesn't allow user data to be attached to the group.
And if I were to create a new GOptionGroup for the --debug option, using g_option_group_new() followed by g_application_add_option_group(), --debug would be relegated to a different option group than the rest of my arguments and wouldn't be shown in the main --help output; the user would have to run myapp --help-debugging (or whatever I named it) to see the --debug flag in the argument synopsis.
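For reference, the separate-group approach described above would look roughly like this (a minimal sketch; the group name "debugging" and my_user_data are placeholders):

GOptionEntry debug_options[] =
{
  { "debug", 'd', G_OPTION_FLAG_NO_ARG,
    G_OPTION_ARG_CALLBACK, &enable_debug,
    "Enable debug output", NULL },
  { NULL }
};

GOptionGroup *debug_group = g_option_group_new ("debugging",
                                                "Debugging options:",
                                                "Show debugging options",
                                                my_user_data, NULL);
g_option_group_add_entries (debug_group, debug_options);
g_application_add_option_group (G_APPLICATION (app), debug_group);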
Do I have all that right? Are GOptionArgFunc callbacks that take user data really mutually exclusive with arguments that are part of the application's main option group? Is there a reason why that would be the case, or is it just a weird hole in the API?

Related

Azure Mobile Services for Xamarin Forms - Conflict Resolution

I'm supporting a production Xamarin Forms app with an offline sync feature implemented using Azure Mobile Services.
We have a lot of production issues related to users losing data, or general instability that goes away if they reinstall the app. After having a look through, I think the issues are around how conflict resolution is handled in the app.
For every entity that tries to sync, we handle MobileServicePushFailedException and then traverse through the errors returned and take action.
catch (MobileServicePushFailedException ex)
{
    foreach (var error in ex.PushResult.Errors) // These are MobileServiceTableOperationErrors
    {
        var status = error.Status; // HTTP status code returned
        // Take action based on this status.
        // If it's 409 or 412, we go into conflict resolution and try to decide
        // whether the client or the server version wins.
    }
}
The conflict resolution seems too custom to me, and I'm checking to see whether there are general guidelines.
For example, we seem to be getting empty values for the 'CreatedAt' and 'UpdatedAt' timestamps on both the local and server versions of the entities returned, which is weird.
var serverItem = error.Result;
var clientItem = error.Item;
// Sometimes serverItem.UpdatedAt or clientItem.UpdatedAt is null. Since we use
// these two fields to determine who wins, we are stumped here.
If anyone can point me to some guidelines or sample code on how these conflicts should generally be handled using the information from the MobileServiceTableOperationError, that would be highly appreciated.
I came across this code snippet in the following doc.
// Simple error/conflict handling.
if (syncErrors != null)
{
    foreach (var error in syncErrors)
    {
        if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null)
        {
            // Update failed, reverting to server's copy.
            await error.CancelAndUpdateItemAsync(error.Result);
        }
        else
        {
            // Discard local change.
            await error.CancelAndDiscardItemAsync();
        }

        Debug.WriteLine(@"Error executing sync operation. Item: {0} ({1}). Operation discarded.",
            error.TableName, error.Item["id"]);
    }
}
An example of surfacing conflicts to the UI, which I found in this doc:
private async Task ResolveConflict(TodoItem localItem, TodoItem serverItem)
{
    // Ask the user to choose the resolution between versions.
    MessageDialog msgDialog = new MessageDialog(
        String.Format("Server Text: \"{0}\" \nLocal Text: \"{1}\"\n",
            serverItem.Text, localItem.Text),
        "CONFLICT DETECTED - Select a resolution:");

    UICommand localBtn = new UICommand("Commit Local Text");
    UICommand serverBtn = new UICommand("Leave Server Text");
    msgDialog.Commands.Add(localBtn);
    msgDialog.Commands.Add(serverBtn);

    localBtn.Invoked = async (IUICommand command) =>
    {
        // To resolve the conflict, update the version of the item being committed.
        // Otherwise, you will keep catching a MobileServicePreconditionFailedException.
        localItem.Version = serverItem.Version;

        // Update again here, just in case another change happened while the user was deciding.
        UpdateToDoItem(localItem);
    };

    serverBtn.Invoked = async (IUICommand command) =>
    {
        RefreshTodoItems();
    };

    await msgDialog.ShowAsync();
}
I hope this helps provide some direction. Although the Azure Mobile docs have been deprecated, the SDK hasn't changed and should still be relevant. If this doesn't help, let me know what you're using for a backend store.

How can I extend the window reuse code paths in vscode and Azure Data Studio?

I am working in the microsoft/azuredatastudio GitHub repo, which is largely forked from vscode. I am trying to extend our command line processing to handle the window reuse parameter, so that if we pass a server connection along with -r, we will open the specified connection. Our current command line processing service is loaded by src\vs\workbench\electron-browser\workbench.ts in Workbench.initServices.
Is there any platform-provided service that is visible to both electron-main and workbench\electron-browser that I could modify or leverage to be informed of the app being reused with new command line arguments?
I've found that the LaunchService defined in src\vs\code\electron-main\launch.ts appears to be responsible for capturing the arguments and opening or reusing the window, but it's not clear how I would marshal a notification from the LaunchService over to our services that are loaded by workbench.
2/12/2019 update:
It looks like I need to add an equivalent of this function in src\vs\code\electron-main\windows.ts
private doOpenFilesInExistingWindow(configuration: IOpenConfiguration, window: ICodeWindow, filesToOpen: IPath[], filesToCreate: IPath[], filesToDiff: IPath[], filesToWait: IPathsToWaitFor): ICodeWindow {
    window.focus(); // make sure window has focus

    window.ready().then(readyWindow => {
        const termProgram = configuration.userEnv ? configuration.userEnv['TERM_PROGRAM'] : void 0;
        readyWindow.send('vscode:openFiles', { filesToOpen, filesToCreate, filesToDiff, filesToWait, termProgram });
    });

    return window;
}
which sends a new message like 'ads:openconnection'. Now to find out how to handle the message.
I ended up using the ipcRenderer service and adding an IPC call to the launch service in main.
// {{SQL CARBON EDIT}}
// give the first used window a chance to process the other command line arguments
if (args['reuse-window'] && usedWindows.length > 0 && usedWindows[0]) {
    let window = usedWindows[0];
    usedWindows[0].ready().then(() => window.send('ads:processCommandLine', args));
}
// {{SQL CARBON EDIT}}
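On the renderer side, handling the message is then a matter of subscribing to that channel (a rough sketch; commandLineService and its processCommandLine method are illustrative names, not the actual azuredatastudio code):

import { ipcRenderer } from 'electron';

// Listen for the channel used by the launch service above.
ipcRenderer.on('ads:processCommandLine', (event, args) => {
    // Hand the fresh command line arguments to our command line processing service.
    commandLineService.processCommandLine(args);
});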

What impact does changing an IReliableQueue to an IReliableConcurrentQueue have in an existing deployment?

I am working in a Service Fabric application that uses IReliableQueue. For the use cases of this system, IReliableConcurrentQueue makes sense, and some local testing (i.e. basically just changing the code to use IReliableConcurrentQueue instead of IReliableQueue - the queue name does not change) shows great performance improvements. However, I am worried about the impact of changing this in a production system (i.e. upgrading). I can't find any docs or online questions (unless I just missed them) about these considerations. For example, in this system, the existing IReliableQueue will almost always have items. So what happens to that data when I upgrade the SF application? Will it be available to dequeue in the IReliableConcurrentQueue? Or would the data be lost? I know I can "just try it", but wanted to see if someone out there had done the same or could offer pointers to existing resources. Thanks!
Sorry for a late answer (that you probably don't need anymore but still).
When we call the GetOrAddAsync method on IReliableStateManager, we aren't retrieving an interface to stored values - we are actually creating an instance of a reliable collection. This basically means that the type of the interface we specify is very important.
Taking this into account, if we do this:
Service v. 1.0
// Somewhere in RunAsync for example
await this.StateManager.GetOrAddAsync<IReliableQueue<long>>("MyCollection")
Then doing this in the next version:
Service v. 1.1
// Somewhere in RunAsync for example
await this.StateManager.GetOrAddAsync<IReliableConcurrentQueue<long>>("MyCollection")
will throw an exception:
Returned reliable object of type Microsoft.ServiceFabric.Data.Collections.DistributedQueue`1[System.Int64] cannot be casted to requested type Microsoft.ServiceFabric.Data.Collections.IReliableConcurrentQueue`1[System.Int64]
and then:
System.ExecutionEngineException: 'Exception of type 'System.ExecutionEngineException' was thrown.'
The above exception looks like a bug, so I have filed one.
UPDATE 2019.06.28
It turned out that the appearance of System.ExecutionEngineException isn't a bug, but rather an undocumented behavior of the Environment.FailFast method in combination with the Visual Studio debugger.
Please see my comment to the above issue.
This is what would happen.
There are plenty of ways to overcome this.
Here is the most obvious one:
Example
var migrate = false; // This flag indicates whether the migration was already done.
var migrateValues = new List<long>();

var applicationFlags = await this.StateManager
    .GetOrAddAsync<IReliableDictionary<string, bool>>("application-flags");

using (var transaction = this.StateManager.CreateTransaction())
{
    var flag = await applicationFlags
        .TryGetValueAsync(transaction, "queue-to-concurrent-queue-migration");

    if (!flag.HasValue || !flag.Value)
    {
        var queue = await this.StateManager
            .GetOrAddAsync<IReliableQueue<long>>("value-collection");

        for (;;)
        {
            var c = await queue.TryDequeueAsync(transaction);
            if (!c.HasValue)
            {
                break;
            }

            migrateValues.Add(c.Value);
        }

        migrate = true;
    }

    // Note: this transaction is never committed, so the dequeues roll back;
    // that is fine here, because the whole collection is removed below anyway.
}

if (migrate)
{
    await this.StateManager.RemoveAsync("value-collection");

    using (var transaction = this.StateManager.CreateTransaction())
    {
        var concurrentQueue = await this.StateManager
            .GetOrAddAsync<IReliableConcurrentQueue<long>>("value-collection");

        foreach (var i in migrateValues)
        {
            await concurrentQueue.EnqueueAsync(transaction, i);
        }

        await applicationFlags.AddOrUpdateAsync(
            transaction,
            "queue-to-concurrent-queue-migration",
            true,
            (s, b) => true);

        // Commit inside the using block, while the transaction is still alive.
        await transaction.CommitAsync();
    }
}
Please note that this code is just an illustrative example and should be properly tested before applying it to a real-life application.

How to add an import to the file with Babel

Say you have a file with:
AddReactImport();
And the plugin:
export default function ({ types: t }) {
  return {
    visitor: {
      CallExpression(p) {
        if (p.node.callee.name === "AddReactImport") {
          // add import if it's not there
        }
      }
    }
  };
}
How do you add import React from 'react'; at the top of the file/tree if it's not there already?
I think more important than the answer is how you find out how to do it. Please tell me, because I'm having a hard time finding sources of information on how to develop Babel plugins. My sources right now are: the Plugin Handbook, Babel Types, the AST Spec, this blog post, and the AST explorer. It feels like using an English-German dictionary to try to speak German.
export default function ({ types: t }) {
  return {
    visitor: {
      Program(path) {
        const identifier = t.identifier('React');
        const importDefaultSpecifier = t.importDefaultSpecifier(identifier);
        const importDeclaration = t.importDeclaration([importDefaultSpecifier], t.stringLiteral('react'));
        path.unshiftContainer('body', importDeclaration);
      }
    }
  };
}
If you want to inject code, just use @babel/template to generate the AST node for it; then inject it as you need to.
Preamble: Babel documentation is not the best
I also agree that, even in 2020, information is sparse. I am getting most of my info by actually working through the babel source code, looking at all the tools (types, traverse, path, code-frame etc...), the helpers they use, existing plugins (e.g. istanbul to learn a bit about basic instrumentation in JS), the webpack babel-loader and more...
For example: unshiftContainer (and actually, babel-traverse in general) has no official documentation, but you can find its source code here (fascinatingly enough, it accepts either a single node or an array of nodes!)
Strategy #1 (updated version)
In this particular case, I would:
Create a @babel/template
prepare that AST once at the start of my plugin
inject it into Program (i.e. the root path) once, only if the particular function call has been found
NOTE: Templates also support variables. Very useful if you want to wrap existing nodes or want to produce slight variations of the same code, depending on context.
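For instance, a placeholder-based template looks roughly like this (a sketch following the @babel/template docs; the names are illustrative):

import template from "@babel/template";
import * as t from "@babel/types";

// %%name%% marks a syntactic placeholder that is filled in when the
// template function is called.
const buildRequire = template(`
  var %%importName%% = require(%%source%%);
`);

const ast = buildRequire({
  importName: t.identifier("myModule"),
  source: t.stringLiteral("my-module"),
});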
Code (using Strategy #1)
import template from "@babel/template";

// template
const buildImport = template(`
  import React from 'react';
`);

// plugin
const plugin = function () {
  const importDeclaration = buildImport();
  let imported = false;
  let root;

  return {
    visitor: {
      Program(path) {
        root = path;
      },
      CallExpression(path) {
        if (!imported && path.node.callee.name === "AddMyImport") {
          // add import if it's not there
          imported = true;
          root.unshiftContainer('body', importDeclaration);
        }
      }
    }
  };
};
Strategy #2 (old version)
An alternative is:
use a utility function to generate an AST from source (parseSource)
prepare that AST once at the start of my plugin
inject it into Program (i.e. the root path) once, only if the particular function call has been found
Code (using Strategy #2)
Same as above but with your own compiler function (not as efficient as @babel/template):
import { parse } from "@babel/parser";
import { codeFrameColumns } from "@babel/code-frame";
import traverse from "@babel/traverse";

/**
 * Helper: Generate AST from source through `@babel/parser`.
 * Copied from somewhere... I think it was `@babel/traverse`.
 * @param {*} source
 */
export function parseSource(source) {
  let ast;
  try {
    source = `${source}`;
    ast = parse(source);
  } catch (err) {
    const loc = err.loc;
    if (loc) {
      err.message +=
        "\n" +
        codeFrameColumns(source, {
          start: {
            line: loc.line,
            column: loc.column + 1,
          },
        });
    }
    throw err;
  }

  const nodes = ast.program.body;
  nodes.forEach(n => traverse.removeProperties(n));
  return nodes;
}
Possible Pitfalls
When a new node is injected/replaced etc., babel will run all plugins on it again. This is why your first instrumentation plugin is likely to hit an infinite loop right off the bat: you want to remember and not re-visit previously visited nodes (I'm using a Set for that).
It gets worse when wrapping nodes. Wrapped nodes (e.g. with @babel/template) are actually copies, not the original node. In that case, you want to remember that the node is instrumented and skip it in case you come across it again, or, again: infinite loop 💥!
If you don't want to instrument nodes that have been emitted by any plugin (not just yours), that is you want to only operate on the original source code, you can skip them by checking whether they have a loc property (injected nodes usually do not have a loc property).
In your case, you are trying to add an import statement, which won't always work without the right plugins enabled or without sourceType set to module.
I believe there's an even better way now: babel-helper-module-imports
For you the code would be
import { addDefault } from "@babel/helper-module-imports";

addDefault(path, 'react', { nameHint: "React" });
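If I remember correctly, addDefault also returns the local identifier it generated, so you can reference the import from code you emit (a sketch; the variable name is mine):

// Returns an Identifier node (e.g. _React) chosen to avoid name collisions.
const reactId = addDefault(path, 'react', { nameHint: "React" });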

Passing Data to a Queue Class Using Laravel 4.1 and Beanstalkd

I have never set up a queueing system before. I decided to give it a shot. It seems the queueing system is working perfectly. However, it doesn't seem the data is being sent correctly. Here is my code.
...
$comment = new Comment(Input::all());
$comment->user_id = $user->id;
$comment->save();

if ($comment->isSaved())
{
    $voters = $comment->argument->voters->unique()->toArray();

    Queue::push('Queues\NewComment', [
        'comment' => $comment->load('argument', 'user')->toArray(),
        'voters'  => $voters,
    ]);

    return Response::json(['success' => true, 'comment' => $comment->load('user')->toArray()]);
}
...
The class that handles this looks like this:
class NewComment {

    public function fire($job, $data)
    {
        $comment = $data['comment'];
        $voters = $data['voters'];

        Log::info($data);

        foreach ($voters as $voter)
        {
            if ($voter['id'] != $comment['user_id'])
            {
                $mailer = new NewCommentMailer($voter, $comment);
                $mailer->send();
            }
        }

        $job->delete();
    }
}
This works beautifully on my local server using the synchronous queue driver. However, on my production server, I'm using Beanstalkd. The queue is firing like it is supposed to. However, I'm getting an error like this:
[2013-12-19 10:25:02] production.ERROR: exception 'ErrorException' with message 'Undefined index: voters' in /var/www/mywebsite/app/queues/NewComment.php:10
If I print out the $data variable passed into the NewComment queue handler, I get this:
[2013-12-19 10:28:05] production.INFO: {"comment":{"incrementing":true,"timestamps":true,"exists":true}} [] []
I have no clue why this is happening. Anyone have an idea how to fix this?
So $voters apparently isn't being put into the queue as part of the payload. I'd build the payload array outside of the Queue::push() function, log the contents, and see exactly what is being put in.
I've found if you aren't getting something out that you expect, chances are, it's not being put in like you expect either.
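Something along these lines (a quick sketch of that debugging step, reusing the code from the question):

$payload = [
    'comment' => $comment->load('argument', 'user')->toArray(),
    'voters'  => $voters,
];

// Inspect exactly what will go onto the queue before pushing it.
Log::info('NewComment payload', $payload);
Queue::push('Queues\NewComment', $payload);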
While you are at it, make sure that the beanstalkd system hasn't got old data stuck in it that is incorrect. You could add a timestamp into the payload to help make sure it's the latest data, and arrange to delete or bury any jobs that don't have the appropriate information - checked before you start to process them. Just looking at a count of items in the beanstalkd tubes should make it plain if there are stuck jobs.
I've not done anything with Laravel, but I have written many tasks for other Beanstalkd and SQS-backed systems, and the hard part is when the job fails, and you have to figure out what went wrong, and how to avoid just redoing the same failure over and over again.
What I ended up doing was sticking with simple numbers. I only stored the comment's ID on the queue and then did all the processing in my queue handler class. That was the easiest way to do it.
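In code, that approach looks roughly like this (a sketch; the comment_id key and the eager-loading details are my choices, not the original code):

// Producer: push only the comment's ID.
Queue::push('Queues\NewComment', ['comment_id' => $comment->id]);

// Handler: look the comment up again and do all the processing here.
class NewComment {

    public function fire($job, $data)
    {
        $comment = Comment::with('argument.voters', 'user')->find($data['comment_id']);

        if ($comment)
        {
            // ... iterate the voters and send mail, exactly as before ...
        }

        $job->delete();
    }
}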
You will get data as expected in the handler by wrapping the data in an array:
array(
    array(
        'comment' => $comment->load('argument', 'user')->toArray(),
        'voters'  => $voters,
    )
)