Where is pi defined in mathjs?

I have a custom bundle for mathjs that looks something like so:
var core = require('mathjs/core');
var math = core.create();
math.import(require('mathjs/lib/type'));
math.import(require('mathjs/lib/function/arithmetic'));
math.import(require('mathjs/lib/function/trigonometry'));
math.import(require('mathjs/lib/expression'));
which I then export. If I then try math.eval('pi'), I get an Exception:
Exception: Error: Undefined symbol pi
I don't see this error if I import the entire mathjs library, but that rather defeats the purpose of a small custom bundle.
Question: What is the minimal import so that math.eval('pi') returns 3.14...?

var core = require('mathjs/core');
var math = core.create();
math.import(require('mathjs/lib/type'));
math.import(require('mathjs/lib/expression'));
math.import(require('mathjs/lib/constants'));
console.log(math.eval('pi')) // 3.141592653589793
See the constants module in the GitHub repository of mathjs.
The value of pi is taken from the standard built-in JavaScript Math object (Math.PI).


Nanopb strip package option raises error at import

I am struggling with nanopb to use an enum from another package whose .options file contains * mangle_names:M_STRIP_PACKAGE. Here is an easy way to reproduce the problem:
I have a root_folder containing folder_A and folder_B.
In folder_A, I have file_A.proto and file_A.options:
file_A.proto:
syntax = "proto2";
package folder_A;
enum my_enum {
    ENUM_0 = 0;
    ENUM_1 = 1;
    ENUM_2 = 2;
}
file_A.options:
* mangle_names:M_STRIP_PACKAGE
In folder_B, I have file_B.proto:
syntax = "proto2";
package folder_B;
import "folder_A/file_A.proto";
message dummy {
    required folder_A.my_enum value = 1;
}
I try to generate proto file with the following command:
nanopb_generator.py -D . -I . -I .\folder_B\ .\folder_A\file_A.proto .\folder_B\file_B.proto
The script fails with error Exception: Could not find enum type folder_A_my_enum while generating default values for folder_B_dummy.
But if I remove the file_A.options, it works correctly.
Also if I replace the enum by a message, it works correctly even with file_A.options.
My question is: is it possible to use the option * mangle_names:M_STRIP_PACKAGE and import an enum at the same time?
I use nanopb-0.4.5.
Thank you !
Currently M_STRIP_PACKAGE does not work when there are imports from another package. I have added the problem to the issue tracker: https://github.com/nanopb/nanopb/issues/783
Imports with name mangling seem to work fine, as long as all imported files belong to the same package and have the same name-mangling settings.
It is also questionable whether stripping the package name from types is a good idea when you are using multiple package names. It sounds like a recipe for name collisions.
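The mismatch behind the error can be pictured with a simplified sketch of the name mangling involved (the helper below is hypothetical, not nanopb's actual code): file_A, which sets M_STRIP_PACKAGE, emits the short identifier, while file_B, which has no options file, still looks up the package-prefixed name from the error message.

```python
# Simplified illustration of the identifier mismatch (not nanopb's real
# implementation). nanopb normally builds C identifiers as
# <package>_<typename>; M_STRIP_PACKAGE drops the package prefix in the
# file that sets it.

def c_identifier(package, typename, strip_package=False):
    """Hypothetical sketch of how a generated C identifier is formed."""
    return typename if strip_package else f"{package}_{typename}"

# file_A.options contains "* mangle_names:M_STRIP_PACKAGE", so file_A emits:
emitted = c_identifier("folder_A", "my_enum", strip_package=True)
# file_B has no options file, so it still references:
referenced = c_identifier("folder_A", "my_enum", strip_package=False)

print(emitted)      # my_enum
print(referenced)   # folder_A_my_enum
# The lookup fails because the two names differ, matching the error
# "Could not find enum type folder_A_my_enum".
```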

OpenModelica generated C-code: I need to make parametrised runs on different machines

I have a simple model in OpenModelica. I generated C code from it (using the FMU export functionality) and want to adjust it so that I can run a parametrised study of one parameter and print the output. In other words, when I run the executable I want to pass it an argument containing the value of a certain parameter, and have the value of some model output printed to a file.
Is that possible?
Yes that is possible.
You mentioned FMU export, so I'll stick to that; it is probably the easiest way.
The binaries inside an FMU are compiled libraries containing all the internal runtime pieces needed to simulate your exported model.
These libraries are usually handled by an FMU importing tool, e.g. OMSimulator (part of OpenModelica) or FMPy.
I wouldn't recommend changing the generated C code directly. It is difficult, and you don't need to do it.
In your FMU import tool you can usually change values of variables and parameters via the fmi2SetXXX functions before running a simulation. Most tools wrap that interface call in some function to change values; have a look at the documentation of your tool.
So let's look at an example. I'm using OpenModelica to export helloWorld and OMSimulator from Python, but check your tool and use what you prefer.
model helloWorld
  parameter Real a = 1;
  Real x(start=1, fixed=true);
equation
  der(x) = a*x;
end helloWorld;
Export as 2.0 Model Exchange FMU in OMEdit with File->Export->FMU or right-click on your model and Export->FMU.
You can find your export settings in Tools->Options->FMI.
Now I have helloWorld.fmu and can simulate it with OMSimulator. I want to change the value of parameter helloWorld.a to 2.0 and use setReal to do so.
>>> from OMSimulator import OMSimulator
>>> oms = OMSimulator()
>>> oms.newModel("helloWorld")
>>> oms.addSystem("helloWorld.root", oms.system_sc)
>>> oms.addSubModel("helloWorld.root.fmu", "helloWorld.fmu")
>>> oms.instantiate("helloWorld")
>>> oms.setReal("helloWorld.root.fmu.a", 2.0)
>>> oms.initialize("helloWorld")
info: maximum step size for 'helloWorld.root': 0.001000
info: Result file: helloWorld_res.mat (bufferSize=10)
>>> oms.simulate("helloWorld")
>>> oms.terminate("helloWorld")
info: Final Statistics for 'helloWorld.root':
NumSteps = 1001 NumRhsEvals = 1002 NumLinSolvSetups = 51
NumNonlinSolvIters = 1001 NumNonlinSolvConvFails = 0 NumErrTestFails = 0
>>> oms.delete("helloWorld")
Now add a loop to this to iterate over different values of your parameter and do whatever post-processing you want.
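A sketch of such a sweep follows. The OMSimulator calls are the same as in the session above; to keep the sketch self-contained and runnable, the closed-form solution x(t) = exp(a*t) of der(x) = a*x with x(0) = 1 stands in for the simulate call, and the per-run result-file name is a hypothetical choice.

```python
import math

def run_sweep(a_values, t_end=1.0):
    """Run the helloWorld model once per value of parameter a."""
    results = {}
    for a in a_values:
        # With OMSimulator, each iteration would repeat the session above:
        #   oms.setReal("helloWorld.root.fmu.a", a)
        #   oms.setResultFile("helloWorld", "helloWorld_a_%g_res.mat" % a)  # hypothetical name
        #   oms.simulate("helloWorld")
        # Here the analytic solution x(t_end) = exp(a * t_end) stands in
        # for the simulated end value:
        results[a] = math.exp(a * t_end)
    return results

for a, x_end in run_sweep([0.5, 1.0, 2.0]).items():
    print(a, x_end)
```

The same loop structure works for any post-processing: collect the result files (or end values) per parameter value and write them to whatever output file you need.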

Implementing TensorFlowjs model in Ionic framework with Capacitor

I am working on a project where I am using Ionic and TensorFlow for machine learning. I converted my TensorFlow model to a TensorFlow.js model and put the model.json file and shard files in the assets folder of my Ionic app. I want to use Capacitor to access the camera and allow users to take photos. The photos will then be passed to the TensorFlow.js model to get and display a prediction for that user.
Here is my typescript code:
import { Component, OnInit, ViewChild, ElementRef, Renderer2 } from '@angular/core';
import { Plugins, CameraResultType, CameraSource } from '@capacitor/core';
import { DomSanitizer, SafeResourceUrl } from '@angular/platform-browser';
import { Platform } from '@ionic/angular';
import * as tf from '@tensorflow/tfjs';
import { rendererTypeName } from '@angular/compiler';
import { Base64 } from '@ionic-native/base64/ngx';
import { defineCustomElements } from '@ionic/pwa-elements/loader';
const { Camera } = Plugins;
@Component({
  selector: 'app-predict',
  templateUrl: './predict.page.html',
  styleUrls: ['./predict.page.scss'],
})
export class PredictPage{
  linearModel : tf.Sequential;
  prediction : any;
  InputTaken : any;
  ctx: CanvasRenderingContext2D;
  pos = { x: 0, y: 0 };
  canvasElement : any;
  photo: SafeResourceUrl;
  model: tf.LayersModel;
  constructor(public el : ElementRef , public renderer : Renderer2 , public platform : Platform, private base64: Base64,
    private sanitizer: DomSanitizer) 
  {}
  
  async takePicture() {
    const image = await Camera.getPhoto({
        quality: 90,
        allowEditing: true,
        resultType: CameraResultType.DataUrl,
        source: CameraSource.Camera});
      const model = await tf.loadLayersModel('src/app/assets/model.json');
      this.photo = this.sanitizer.bypassSecurityTrustResourceUrl(image.base64String);
      defineCustomElements(window);
    const pred = await tf.tidy(() => {
          // Make and format the predications
        const output = this.model.predict((this.photo)) as any;
                                
          // Save predictions on the component
        this.prediction = Array.from(output.dataSync()); 
        });
  }
}
In this code, I have imported the necessary tools. Then, I have my constructor and a takePicture() function. In takePicture(), I have included functionality for the user to take pictures. However, I am having trouble passing the pictures taken to the TensorFlow.js model to get a prediction. I am passing the picture to the model in this line of code:
const output = this.model.predict((this.photo)) as any;
However, I am getting an error stating that:
Argument of type 'SafeResourceUrl' is not assignable to parameter of type 'Tensor | Tensor[]'.\n Type 'SafeResourceUrl' is missing the following properties from type 'Tensor[]': length, pop, push, concat, and 26 more.
It would be appreciated if I could receive some guidance regarding this topic.
The model is expecting you to pass in a Tensor input, but you're passing it some other image format that isn't in the tfjs ecosystem. You should first convert this.photo to a Tensor, or, perhaps easier, convert image.base64String to a tensor.
Since you seem to be using node, try this code
// Move the base64 image into a buffer
const b = Buffer.from(image.base64String, 'base64')
// Decode the buffer into a tensor (renamed so it doesn't shadow `image`)
const imageTensor = tf.node.decodeImage(b)
// Compute the output
const output = this.model.predict(imageTensor) as any;
Other conversion solutions here: convert base64 image to tensor

Use TensorFlow model with Swift for iOS

We are trying to use TensorFlow Face Mesh model within our iOS app. Model details: https://drive.google.com/file/d/1VFC_wIpw4O7xBOiTgUldl79d9LA-LsnA/view.
I followed the official TensorFlow tutorial for setting up the model: https://firebase.google.com/docs/ml-kit/ios/use-custom-models and also printed the model input/output using the Python script in the tutorial and got this:
INPUT
[ 1 192 192 3]
<class 'numpy.float32'>
OUTPUT
[ 1 1 1 1404]
<class 'numpy.float32'>
At this point, I'm pretty lost trying to understand what those numbers mean, and how do I pass the input image and get the output face mesh points using the model Interpreter. Here's my Swift code so far:
let coreMLDelegate = CoreMLDelegate()
var interpreter: Interpreter
// Core ML delegate will only be created for devices with Neural Engine
if coreMLDelegate != nil {
    interpreter = try Interpreter(modelPath: modelPath,
                                  delegates: [coreMLDelegate!])
} else {
    interpreter = try Interpreter(modelPath: modelPath)
}
Any help will be highly appreciated!
What those numbers mean completely depends on the model you're using. It's unrelated to both TensorFlow and Core ML.
The output is a 1x1x1x1404 tensor, which basically means you get a list of 1404 numbers. How to interpret those numbers depends on what the model was designed to do.
If you didn't design the model yourself, you'll have to find documentation for it.
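That said, for this particular model the shapes do have a common interpretation: Face Mesh models typically output 468 face landmarks with (x, y, z) coordinates each, and 468 * 3 = 1404. Assuming that layout (verify it against the model's own documentation), the flat output buffer can be regrouped like this sketch, shown in Python with a placeholder buffer in place of the interpreter's real output:

```python
# Assumption: the 1x1x1x1404 output is 468 landmarks * 3 coordinates (x, y, z).
# A real run would copy the interpreter's output buffer here; a zero-filled
# placeholder keeps the sketch self-contained.
flat = [0.0] * 1404

assert len(flat) == 468 * 3
# Group every 3 consecutive values into one (x, y, z) landmark
landmarks = [tuple(flat[i * 3:(i + 1) * 3]) for i in range(468)]

print(len(landmarks))     # 468
print(len(landmarks[0]))  # 3 -> (x, y, z) of the first landmark
```

The same regrouping applies in Swift: read the interpreter's output tensor as a flat array of 1404 Float32 values and take them three at a time.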

How to declare a Mongo Binary Object in Flask

I'm building a Flask application where I will be serving small images. These images are stored in MongoDB as binary data. In a helper function, I can store the data with these lines of Python:
a = {"file_name": f, "payload": Binary(article.read())}
ARTICLES.insert(a)
I'm trying to build a class that contains the image. However, I cannot find the correct field declaration:
class BinaryFile(mongo.Document):
    created_at = mongo.DateTimeField(default=datetime.datetime.now, required=True)
    file_name = mongo.StringField(max_length=255, required=True)
    payload = mongo.Binary()
producing this error:
AttributeError: 'MongoEngine' object has no attribute 'Binary'
Can anyone suggest the correct way to declare this value or am I completely off base? This page does not provide a way to declare a field as Binary: http://api.mongodb.org/python/current/api/bson/index.html
Thanks!
Gabe helped get me on the right path.
First, I had to decide whether to use the standard Binary format or move to GridFS; I chose to stick with regular Binary.
What I didn't understand was that the DateTimeField and StringField were being provided by MongoEngine. Gabe's comment got me going that way and I found the MongoEngine fields docs: http://docs.mongoengine.org/apireference.html#fields
I used the ImageField from those docs and got the error mongoengine.fields.ImproperlyConfigured: PIL library was not found, which was fixed by running
pip install Pillow
So now I have
import datetime
from app import mongo
from flask import url_for
from bson.binary import Binary
class BinaryFile(mongo.Document):
    created_at = mongo.DateTimeField(default=datetime.datetime.now, required=True)
    file_name = mongo.StringField(max_length=255, required=True)
    payload = mongo.ImageField(required=False)
and I'm on to the next error! See you soon!