BBOS 10 File picker not returning signals properly - blackberry-10

I implemented a native File Picker on BlackBerry 10. After a bit of messing around it finally recognised the class; it opens fine and prints the file address on the console, but it looks like two signals are not connecting properly, bearing in mind this is pretty much a straight copy of code from the BlackBerry 10 docs.
using namespace bb::cascades::pickers;

void Utils::getFile() const {
    FilePicker* filePicker = new FilePicker();
    filePicker->setType(FileType::Music);
    filePicker->setTitle("Select Sound");
    filePicker->setMode(FilePickerMode::Picker);
    filePicker->open();

    // Connect the fileSelected() signal with the slot.
    QObject::connect(filePicker,
        SIGNAL(fileSelected(const QStringList&)),
        this,
        SLOT(onFileSelected(const QStringList&)));

    // Connect the canceled() signal with the slot.
    QObject::connect(filePicker,
        SIGNAL(canceled()),
        this,
        SLOT(onCanceled()));
}
I wanted it to return the file URL to QML with this (it works fine with QFileDialog, but my SDK wouldn't recognise that class):

var test = utils.getFile()
if (test == "") console.debug("empty")
else console.debug(test)
But I'm getting these messages on the console:

Object::connect: No such slot Utils::onFileSelected(const QStringList&) in ../src/Utils.cpp:27
Object::connect: No such slot Utils::onCanceled() in ../src/Utils.cpp:33

and the else branch of the QML function logs undefined when the picker opens.
Does anyone know where I cocked up, or how I could get the QFileDialog class to be found by the SDK?

I just wanted to give you a bit of an explanation in case you're still having some trouble. The concepts in Qt were a little foreign to me when I started in on it as well.
There are a couple ways you can do this. The easiest would probably be the pure QML route:
import bb.cascades 1.2
import bb.cascades.pickers 1.0

Page {
    attachedObjects: [
        FilePicker {
            id: filePicker
            type: FileType.Other
            onFileSelected: {
                console.log("selected files: " + selectedFiles)
            }
        }
    ]
    Container {
        layout: DockLayout {
        }
        Button {
            id: launchFilePicker
            text: qsTr("Open FilePicker")
            onClicked: {
                filePicker.open();
            }
        }
    }
}
When you click the launchFilePicker button, it invokes a FilePicker. Once a file is selected, the fileSelected signal is emitted. The slot in this case is the predefined onFileSelected handler, which logs the file paths of the selected files (a parameter of the signal) to the console.
The C++ route is a little more work, but still doable.
If your class file was called Util, then you'd have a Util.h that looks something like this:
#ifndef UTIL_H_
#define UTIL_H_

#include <QObject>

class QStringList;

class Util : public QObject
{
    Q_OBJECT

public:
    Util(QObject *parent = 0);

    Q_INVOKABLE
    void getFile() const;

private Q_SLOTS:
    void onFileSelected(const QStringList&);
    void onCanceled();
};

#endif /* UTIL_H_ */
Note the Q_INVOKABLE getFile() method. Q_INVOKABLE will eventually allow us to call this method directly from QML.
The corresponding Util.cpp would look like:
#include "Util.h"
#include <QDebug>
#include <QStringList>
#include <bb/cascades/pickers/FilePicker>
using namespace bb::cascades;
using namespace bb::cascades::pickers;
Util::Util(QObject *parent) : QObject(parent)
{
}
void Util::getFile() const
{
FilePicker* filePicker = new FilePicker();
filePicker->setType(FileType::Other);
filePicker->setTitle("Select a file");
filePicker->setMode(FilePickerMode::Picker);
filePicker->open();
QObject::connect(
filePicker,
SIGNAL(fileSelected(const QStringList&)),
this,
SLOT(onFileSelected(const QStringList&)));
QObject::connect(
filePicker,
SIGNAL(canceled()),
this,
SLOT(onCanceled()));
}
void Util::onFileSelected(const QStringList &stringList)
{
qDebug() << "selected files: " << stringList;
}
void Util::onCanceled()
{
qDebug() << "onCanceled";
}
To make your Q_INVOKABLE getFile() method available to QML, you'd need to create an instance and set it as a ContextProperty. I do so in my applicationui.cpp like so:
Util *util = new Util(app);
QmlDocument *qml = QmlDocument::create("asset:///main.qml").parent(this);
qml->setContextProperty("_util", util);
Then, you can call this Q_INVOKABLE getFile() method from QML:
Page {
    Container {
        layout: DockLayout {}
        Button {
            id: launchFilePicker
            text: qsTr("Open FilePicker")
            onClicked: {
                _util.getFile();
            }
        }
    }
}
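Note that getFile() returns void, which is why var test = _util.getFile() logs undefined: FilePicker is asynchronous, so there is nothing to return at call time. If you want the selection back in QML, one option (a sketch; the filePicked signal is my own addition, not from the docs) is to re-emit the result from Util and handle it in QML:

// Util.h -- add a signal that QML can connect to
Q_SIGNALS:
    void filePicked(const QString &filePath);

// Util.cpp -- forward the picker result
void Util::onFileSelected(const QStringList &stringList)
{
    qDebug() << "selected files: " << stringList;
    if (!stringList.isEmpty())
        emit filePicked(stringList.first());
}

// main.qml -- e.g. on the Page
onCreationCompleted: {
    _util.filePicked.connect(function (path) {
        console.debug(path);
    });
}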
The "No such slot" messages from your original code mean the meta-object system never saw onFileSelected() and onCanceled() declared as slots on Utils; declaring them under Q_SLOTS (as in Util.h above) is what fixes that. Like Richard says, most of the documentation covers how to create signals/slots, so you could review that, but also have a look at the Cascades-Samples on GitHub.
Hope that helps!

Related

Can You Minimize the Soft Keyboard on Android From Text Completed Event

I've seen various answers to this question for older versions, but I'm not sure how to translate them to MAUI. The question being: is there a way to minimize the soft keyboard on a device from the Text Completed event of an Entry control?
I finally figured out how to do this. This solution is for Android only right now. It doesn't use a custom handler, since I could not get the window token from PlatformView. Instead, the code looks like this:
#if ANDROID
var imm = (Android.Views.InputMethods.InputMethodManager)MauiApplication.Current.GetSystemService(Android.Content.Context.InputMethodService);
if (imm != null)
{
    // This came from: https://www.syncfusion.com/kb/12559/how-to-hide-the-keyboard-when-scrolling-in-xamarin-forms-listview-sflistview
    var activity = Microsoft.Maui.ApplicationModel.Platform.CurrentActivity;
    Android.OS.IBinder wToken = activity.CurrentFocus?.WindowToken;
    imm.HideSoftInputFromWindow(wToken, 0);
}
#endif
So credit to the Syncfusion folks who published their version; the code above is modified from that to work in MAUI.
The code belongs in a custom handler, based on the docs article Customize a control with a mapper. In that MAUI handler, handler.PlatformView is the native Android control; the Xamarin.Android properties and methods are on that.
Something like:
using Microsoft.Maui.Platform;
namespace CustomizeHandlersDemo;
public partial class CustomizeEntryPage : ContentPage
{
public CustomizeEntryPage()
{
InitializeComponent();
ModifyEntry();
}
void ModifyEntry()
{
Microsoft.Maui.Handlers.EntryHandler.Mapper.AppendToMapping(
"MyCustomization", (handler, view) =>
{
#if ANDROID
handler.PlatformView....
#elif IOS
#elif WINDOWS
#endif
});
}
}
NOTE: That example modifies ALL Entries.
If you want to modify only SOME Entries, you instead define a subclass (e.g. public class MyEntry : Entry {}), and do this:
Microsoft.Maui.Handlers.EntryHandler.Mapper.AppendToMapping(
    "MyEntryCustomizationOrWhatever", (handler, view) =>
    {
        if (view is MyEntry)
        {
#if ANDROID
            handler.PlatformView....
#elif IOS
#elif WINDOWS
#endif
        }
    });
For your specific situation, the line you were having trouble adapting to MAUI contains btnSignIn.WindowToken.
Replace that with handler.PlatformView.WindowToken.
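Putting the two answers together, a rough sketch of what the Android branch could look like (the mapping key and the use of the native EditorAction event for "text completed" are my assumptions, not from the original posts):

Microsoft.Maui.Handlers.EntryHandler.Mapper.AppendToMapping(
    "HideKeyboardOnCompleted", (handler, view) =>
    {
#if ANDROID
        // PlatformView is the native EditText; EditorAction fires when the
        // user presses the action key (e.g. Done) on the soft keyboard.
        handler.PlatformView.EditorAction += (s, e) =>
        {
            var imm = handler.PlatformView.Context?.GetSystemService(
                    Android.Content.Context.InputMethodService)
                as Android.Views.InputMethods.InputMethodManager;
            // Use the control's own window token instead of btnSignIn's.
            imm?.HideSoftInputFromWindow(handler.PlatformView.WindowToken, 0);
            e.Handled = false; // let the default action still run
        };
#endif
    });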

Listen to keyboard input in the whole Blazor page

I'm trying to implement a Blazor app that listens to keyboard input all the time (some kind of full-screen game, let's say).
I can think of a keydown event listener as a possible implementation, since there isn't really an input field to auto-focus.
Is there a better solution for reacting to key presses on any part of the screen?
If an event listener is the chosen approach, how can I add one from a client-side Blazor app? I had failed trying to do so with a script like this:
EDIT: I modified the code below a little to actually make it work, after fixing the original key mistake that I was asking about.
scripts/event-listener.js
window.JsFunctions = {
    addKeyboardListenerEvent: function (foo) {
        let serializeEvent = function (e) {
            if (e) {
                return {
                    key: e.key,
                    code: e.keyCode.toString(),
                    location: e.location,
                    repeat: e.repeat,
                    ctrlKey: e.ctrlKey,
                    shiftKey: e.shiftKey,
                    altKey: e.altKey,
                    metaKey: e.metaKey,
                    type: e.type
                };
            }
        };
        // window.document.addEventListener('onkeydown', function (e) { // Original error
        window.document.addEventListener('keydown', function (e) {
            DotNet.invokeMethodAsync('Numble', 'JsKeyDown', serializeEvent(e))
        });
    }
};
index.html
<head>
    <!-- -->
    <script src="scripts/event-listener.js"></script>
</head>
Invoking it through:
protected override async Task OnAfterRenderAsync(bool firstRender)
{
    // Note: you may want to guard this with firstRender so the listener
    // is only registered once.
    await jsRuntime.InvokeVoidAsync("JsFunctions.addKeyboardListenerEvent");
}
and having the following method to receive the events:
using Microsoft.AspNetCore.Components.Web;
using Microsoft.JSInterop;

namespace Numble;

public static class InteropKeyPress
{
    [JSInvokable]
    public static Task JsKeyDown(KeyboardEventArgs e)
    {
        Console.WriteLine("***********************************************");
        Console.WriteLine(e.Key);
        Console.WriteLine("***********************************************");
        return Task.CompletedTask;
    }
}
I managed to get the script to execute, but I'm not receiving any events.
The name of the event is keydown, not onkeydown.
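That is, the one-line fix already reflected in the edited script above:

// Wrong: 'onkeydown' is the name of the handler property, not a DOM event name.
// window.document.addEventListener('onkeydown', function (e) { /* never fires */ });

// Right: DOM event names drop the 'on' prefix.
window.document.addEventListener('keydown', function (e) { /* fires on every key press */ });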

Unity WebGL editable configuration file

How can I make a Unity WebGL project read some kind of configuration file (any format) that stays editable after the Build step, outside the Unity workspace? (The Build directory contains the packaged files.)
The use case is to make the backend API used by this WebGL project configurable on the hosting server, so that when the player/user browses to it, the project knows where to connect to the backend API.
The closest thing I could find so far is custom JavaScript browser scripting. Any advice, or is there an existing API I could use from Unity?
An update on the solution chosen for this question: the JavaScript browser scripting method. Three files need to be created in total:
WebConfigurationManager.cs
Place it in the asset folder. This file is the main entry point for the C# code. It decides where to get the web configuration: from a default value in another C# class (when running in the Unity editor), or via the browser scripting method (when browsing the distributed build in a browser).
WebConfigurationManager.jslib
Place it in the same folder as WebConfigurationManager.cs. This file is the JavaScript code to be loaded by the browser.
web-config.json
Your JSON configuration. The configuration file can be hosted anywhere; in the example below it sits at the root of the distributed build folder. You have to know where to load the file from, for example https://<website>/web-config.json.
// WebConfigurationManager.cs
using System;
using UnityEngine;
using System.Runtime.InteropServices;
using AOT;

// The class name must match the file name (WebConfigurationManager.cs)
// for Unity to use it as a MonoBehaviour.
public class WebConfigurationManager : MonoBehaviour
{
#if UNITY_WEBGL && !UNITY_EDITOR
    // Load web-config.json from the browser; the result is passed to EnvironmentConfigurationCallback.
    public delegate void EnvironmentConfigurationCallback(System.IntPtr ptr);

    [DllImport("__Internal")]
    private static extern void GetEnvironmentConfiguration(EnvironmentConfigurationCallback callback);

    void Start()
    {
        GetEnvironmentConfiguration(Callback);
    }

    [MonoPInvokeCallback(typeof(EnvironmentConfigurationCallback))]
    public static void Callback(System.IntPtr ptr)
    {
        string value = Marshal.PtrToStringAuto(ptr);
        try
        {
            // webConfig contains the values loaded from web-config.json.
            // MainConfig is the data model class of your configuration.
            var webConfig = JsonUtility.FromJson<MainConfig>(value);
        }
        catch (Exception e)
        {
            Debug.LogError($"Failed to read configuration. {e.Message}");
        }
    }
#else
    void Start()
    {
        GetEnvironmentConfiguration();
    }

    private void GetEnvironmentConfiguration()
    {
        // Do nothing special in the Unity editor: mock the configuration instead.
        var testConfig = JsonUtility.FromJson<MainConfig>("{\n" +
            "  \"apiEndpoint\": \"ws://1.1.1.1:30080/events\",\n" +
            "  \"updateInterval\": 5\n" +
            "}");
        Debug.Log(testConfig.apiEndpoint);
        Debug.Log(testConfig.updateInterval);
    }
#endif
}
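The MainConfig data model isn't shown in the answer; a minimal sketch that matches the mocked JSON above would be:

// MainConfig.cs -- assumed data model; JsonUtility needs a [Serializable] type
// whose public field names match the JSON keys exactly.
using System;

[Serializable]
public class MainConfig
{
    public string apiEndpoint;
    public int updateInterval;
}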
// WebConfigurationManager.jslib
mergeInto(LibraryManager.library, {
    GetEnvironmentConfiguration: function (obj) {
        // Copy a JS string into the Emscripten heap and return a pointer to it.
        // (Newer Unity versions may need stringToUTF8 and dynCall in place of
        // writeStringToMemory and Runtime.dynCall.)
        function getPtrFromString(str) {
            var buffer = _malloc(lengthBytesUTF8(str) + 1);
            writeStringToMemory(str, buffer);
            return buffer;
        }
        // Load web-config.json via a web request.
        var request = new XMLHttpRequest();
        request.open("GET", "./web-config.json", true);
        request.onreadystatechange = function () {
            if (request.readyState === 4 && request.status === 200) {
                // Hand the response back to the C# callback as a pointer.
                var buffer = getPtrFromString(request.responseText);
                Runtime.dynCall('vi', obj, [buffer]);
            }
        };
        request.send();
    }
});
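And the web-config.json itself, matching the MainConfig fields, would look something like this:

{
    "apiEndpoint": "ws://1.1.1.1:30080/events",
    "updateInterval": 5
}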

Can I call Ionic 4 / Capacitor Electron code from the Ionic part of the application?

I am investigating using Ionic 4 / Capacitor to target Windows via the Electron option, for an application where I want to use SQLite.
Using the Ionic Native SQLite plugin, which wraps this Cordova plugin, the out-of-the-box Windows support is for UWP, not Desktop, which is what runs (via Electron) in the Capacitor wrapper.
My plan was to see if I could use an Electron SQLite package and call it from my Ionic application by making a wrapper class for the Ionic Native one, similar to what I did to get browser support by following this tutorial.
If I can call the Electron code from my Ionic app, then I can't see why this wouldn't work.
So, my question here is: can I call code (I will add functions to use SQLite) that I add to the hosting Electron application from within the Ionic (web) code? And if so, how?
Thanks in advance for any help
[UPDATE1]
I tried the following. From an Ionic page, I have a button click handler where I raise an event:

export class HomePage {
    public devtools(): void {
        let emit = new EventEmitter(true);
        emit.emit('myEvent');

        var evt = new CustomEvent('myEvent');
        window.dispatchEvent(evt);
    }
}
Then, within the Electron project's index.js, I tried:

mainWindow.webContents.on('myEvent', () => {
    mainWindow.openDevTools();
});

const ipc = require('electron').ipcMain;
ipc.on('myEvent', (ev, arg) => {
    mainWindow.openDevTools();
});
But neither worked.
I should mention that I know very little about Electron; this is my first exposure to it (via Capacitor).
In case someone is interested, this is how I solved it.
I'm using Ionic 4 / Capacitor + Vue 3.
In my entry file (app.ts) I have declared a global interface called Window as follows:
// app.ts
declare global { interface Window { require: any; } }
Then, I have written the following class:
// electron.ts
import { isPlatform } from '@ionic/core';

export class Electron
{
    public static isElectron = isPlatform(window, 'electron');

    public static getElectron()
    {
        if (this.isElectron)
        {
            return window.require('electron');
        }
        else
        {
            return null;
        }
    }

    public static getIpcRenderer()
    {
        if (this.isElectron)
        {
            return window.require('electron').ipcRenderer;
        }
        else
        {
            return null;
        }
    }

    public static getOs()
    {
        if (this.isElectron)
        {
            return window.require('os');
        }
        else
        {
            return null;
        }
    }
}
And I use it like this:
// electronabout.ts
import { IAbout } from './iabout';
import { Plugins } from '@capacitor/core';
import { Electron } from '../utils/electron';

export class ElectronAbout implements IAbout
{
    constructor() { }

    public async getDeviceInfo()
    {
        let os = Electron.getOs();
        let devInfo =
        {
            arch: os.arch(),
            platform: os.platform(),
            type: os.type(),
            userInfo: os.userInfo()
        };
        return devInfo;
    }

    public async showDeviceInfo()
    {
        const devInfo = await this.getDeviceInfo();
        await Plugins.Modals.alert({ title: 'Info from Electron', message: JSON.stringify(devInfo) });
    }
}
This is working but, of course, I still need to refactor the Electron class (electron.ts); using the singleton pattern is probably a better idea, as sketched below.
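For example, a minimal sketch of that singleton idea (my own illustration, not part of the original solution) could resolve the require() calls once:

// electron.ts (sketch): cache the Electron modules on first access.
import { isPlatform } from '@ionic/core';

export class Electron
{
    private static instance: Electron | null = null;

    public static get(): Electron
    {
        if (!Electron.instance)
        {
            Electron.instance = new Electron();
        }
        return Electron.instance;
    }

    // Field order matters: isElectron is evaluated first.
    public readonly isElectron = isPlatform(window, 'electron');
    public readonly ipcRenderer = this.isElectron ? window.require('electron').ipcRenderer : null;
    public readonly os = this.isElectron ? window.require('os') : null;

    private constructor() { }
}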
I hope this helps.
Update
You can communicate from the renderer process with your main process (index.js) like this:
// somefile.ts
if (Electron.isElectron)
{
    let ipc = Electron.getIpcRenderer();
    ipc.once('hide-menu-button', (event) => { this.isMenuButtonVisible = false; });
}

// index.js
let newWindow = new BrowserWindow(windowOptions);
newWindow.loadURL(`file://${__dirname}/app/index.html`);
newWindow.webContents.on('dom-ready', () => {
    newWindow.webContents.send('hide-menu-button');
    newWindow.show();
});
I dug into this yesterday and have an example for you using Angular (this should apply to Ionic too).
In your service, declare require so we can use it:

// Below your imports
declare function require(name: string);
Then in whatever function you want to use it in:

// Require the ipcRenderer so we can emit to the IPC and call a function.
// Use @ts-ignore or else Angular won't compile.
// @ts-ignore
const ipc = window.require('electron').ipcRenderer;

// Send a message over the IPC.
// @ts-ignore
ipc.send('test', 'google');
Then in the created index.js within the electron folder:

// ipcMain receives messages sent from the renderer via ipcRenderer.send.
const ipc = require('electron').ipcMain;

// Listen for the emitted event.
ipc.addListener('test', (ev, arg) => {
    // console.log('ev', ev);
    console.log('arg', arg);
});
It's probably not the correct way to access it, but it's the best way I could find. From my understanding, ipcRenderer is used for the renderer (browser) processes to talk to the main process within Electron, so in our situation it enables our web layer to communicate with the Electron side.

GTK: allow opening files with a new Vala application

I'm developing a media player with Vala and I want to be able to open audio files with this application (once it is installed).
In the .desktop file I added the following MIME types to indicate which files the application can open (they are the same MIME types as in Banshee):
MimeType=application/musepack;application/ogg;application/rss+xml;application/vnd.emusic-emusic_list;application/x-ape;application/x-democracy;application/x-extension-m4a;application/x-extension-mp4;application/x-flac;application/x-flash-video;application/x-id3;application/x-linguist;application/x-matroska;application/x-miro;application/x-musepack;application/x-netshow-channel;application/x-ogg;application/x-quicktime-media-link;application/x-quicktimeplayer;application/x-shorten;application/x-troff-msvideo;application/xspf+xml;audio/3gpp;audio/AMR;audio/AMR-WB;audio/ac3;audio/ape;audio/avi;audio/basic;audio/flac;audio/midi;audio/mp;audio/mp2;audio/mp3;audio/mp4;audio/mp4a-latm;audio/mpc;audio/mpeg;audio/mpeg3;audio/mpegurl;audio/musepack;audio/ogg;audio/vorbis;audio/wav;audio/wave;audio/x-amzxml;audio/x-ape;audio/x-flac;audio/x-it;audio/x-m4a;audio/x-matroska;audio/x-mod;audio/x-mp;audio/x-mp3;audio/x-mpc;audio/x-mpeg;audio/x-mpeg-3;audio/x-mpegurl;audio/x-ms-asf;audio/x-ms-asx;audio/x-ms-wax;audio/x-ms-wma;audio/x-musepack;audio/x-ogg;audio/x-pn-aiff;audio/x-pn-au;audio/x-pn-wav;audio/x-pn-windows-acm;audio/x-s3m;audio/x-sbc;audio/x-scpls;audio/x-speex;audio/x-tta;audio/x-vorbis;audio/x-vorbis+ogg;audio/x-wav;audio/x-wavpack;audio/x-xm;image/avi;image/x-pict;misc/ultravox;text/google-video-pointer;text/x-google-video-pointer;text/x-opml+xml;video/3gpp;video/avi;video/dv;video/fli;video/flv;video/mp4;video/mp4v-es;video/mpeg;video/msvideo;video/ogg;video/quicktime;video/vivo;video/vnd.divx;video/vnd.vivo;video/x-anim;video/x-avi;video/x-flc;video/x-fli;video/x-flic;video/x-flv;video/x-m4v;video/x-matroska;video/x-mpeg;video/x-mpg;video/x-ms-asf;video/x-ms-wm;video/x-ms-wmv;video/x-ms-wmx;video/x-ms-wvx;video/x-msvideo;video/x-nsv;video/x-ogm+ogg;video/x-theora;video/x-theora+ogg;x-scheme-handler/lastfm;x-scheme-handler/u1ms;
By doing this, the application is shown in the "Open with" dialog when I click on a file.
Then, in my Gtk.Application class, I added this to the constructor:

class SomeClass : Gtk.Application {
    public SomeClass (string[] args) {
        Object (application_id: "some.id", flags: ApplicationFlags.HANDLES_OPEN);
        // do stuff...
    }
}

And finally I added the open method, which is supposed to be called when a file is opened with the application:

public override void open (File[] files, string hint) {
    // do stuff ...
}
However, when I try to open an .mp3 file with my application, a dialog appears which says:
"No es poden obrir els fitxers o uris amb aquesta aplicació"
in English:
"The files or URIs cannot be opened with this application"
So my question is: am I missing something?
I've added the MIME types in the .desktop file, I've activated the HANDLES_OPEN flag, and I've implemented the open method.
PS: I'm working on elementary OS and I install my app with the CMake build system.
In GLib/C terminology: You need to connect your implementation of open to the GApplication's ::open signal.
In Vala terminology: See signals:
class Foo : Object {
    public signal void some_event (); // definition of the signal

    public void method () {
        some_event (); // emitting the signal (callbacks get invoked)
    }
}

void callback_a () {
    stdout.printf ("Callback A\n");
}

void callback_b () {
    stdout.printf ("Callback B\n");
}

void main () {
    var foo = new Foo ();
    foo.some_event.connect (callback_a); // connecting the callback functions
    foo.some_event.connect (callback_b);
    foo.method ();
}
...
So, in Vala terms:
You need a reference to your GLib.Application.
You need to call app.open.connect (your_open_handler); somewhere before your main loop starts running.
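Put together, a minimal sketch (the class and handler names are mine) would be:

public class MyPlayer : Gtk.Application {
    public MyPlayer () {
        Object (application_id: "some.id", flags: ApplicationFlags.HANDLES_OPEN);
        // Connect before the main loop runs, so ::open gets delivered.
        this.open.connect (on_open);
    }

    private void on_open (File[] files, string hint) {
        foreach (var file in files) {
            stdout.printf ("opening %s\n", file.get_uri ());
        }
    }
}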