I implemented a signature pad using https://github.com/ramsatt/Angular9SignaturePad/tree/master/src/app/_componets/signature-pad and it works fine on smaller devices, but on iPads and larger devices (7" and up) it doesn't work properly.
When drawing on the screen, the resulting line is offset from where the user touches (the signature doesn't appear directly under the pen as the user draws).
How can I fix this?
So I fixed it by adding the code below and calling it in ngOnInit.
resizeCanvas() {
  const canvas = this.signaturePadElement.nativeElement;
  const width = canvas.width;
  const height = canvas.height;
  // Scale the canvas backing store by the device pixel ratio so strokes
  // appear directly under the pen on high-DPI screens
  const ratio = Math.max(window.devicePixelRatio || 1, 1);
  if (ratio <= 2) {
    canvas.width = width * ratio;
    canvas.height = height * ratio;
    canvas.getContext("2d").scale(ratio, ratio);
  }
}
Then call it in ngOnInit:
ngOnInit() {
  this.resizeCanvas();
}
Here this.signaturePadElement is the canvas element reference obtained using @ViewChild().
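For context, here is a minimal sketch of how that reference might be declared; the template reference name signaturePad, the canvas markup, and the use of static: true (which makes the element available as early as ngOnInit) are assumptions on my part, not part of the library:
import { Component, ElementRef, OnInit, ViewChild } from '@angular/core';

@Component({
  selector: 'app-signature-pad',
  // '#signaturePad' is a hypothetical template reference variable
  template: '<canvas #signaturePad width="400" height="200"></canvas>',
})
export class SignaturePadComponent implements OnInit {
  // static: true resolves the query before ngOnInit runs,
  // so nativeElement is available when resizeCanvas() is called there
  @ViewChild('signaturePad', { static: true })
  signaturePadElement!: ElementRef<HTMLCanvasElement>;

  ngOnInit() {
    this.resizeCanvas();
  }

  resizeCanvas() {
    const canvas = this.signaturePadElement.nativeElement;
    const width = canvas.width;
    const height = canvas.height;
    const ratio = Math.max(window.devicePixelRatio || 1, 1);
    if (ratio <= 2) {
      canvas.width = width * ratio;
      canvas.height = height * ratio;
      // Scale the context so strokes land directly under the pen on high-DPI screens
      canvas.getContext('2d')?.scale(ratio, ratio);
    }
  }
}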
I am working off of Apple's sample project that uses the ARMatteGenerator to generate an MTLTexture that can be used as an occlusion matte in the people occlusion technology.
I would like to determine how I could run the generated matte through a CIFilter. In my code, I am "filtering" the matte like so:
func updateMatteTextures(commandBuffer: MTLCommandBuffer) {
    guard let currentFrame = session.currentFrame else {
        return
    }

    var targetImage: CIImage?

    alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer)
    dilatedDepthTexture = matteGenerator.generateDilatedDepth(from: currentFrame, commandBuffer: commandBuffer)

    targetImage = CIImage(mtlTexture: alphaTexture!, options: nil)
    monoAlphaCIFilter?.setValue(targetImage!, forKey: kCIInputImageKey)
    monoAlphaCIFilter?.setValue(CIColor.red, forKey: kCIInputColorKey)
    targetImage = (monoAlphaCIFilter?.outputImage)!

    let drawingBounds = CGRect(origin: .zero, size: CGSize(width: alphaTexture!.width, height: alphaTexture!.height))
    context.render(targetImage!, to: alphaTexture!, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())
}
When I go to composite the matte texture and the backgrounds, there is no filtering effect applied to the matte. This is how the textures are being composited:
func compositeImagesWithEncoder(renderEncoder: MTLRenderCommandEncoder) {
    guard let textureY = capturedImageTextureY, let textureCbCr = capturedImageTextureCbCr else {
        return
    }

    // Push a debug group allowing us to identify render commands in the GPU Frame Capture tool
    renderEncoder.pushDebugGroup("CompositePass")

    // Set render command encoder state
    renderEncoder.setCullMode(.none)
    renderEncoder.setRenderPipelineState(compositePipelineState)
    renderEncoder.setDepthStencilState(compositeDepthState)

    // Setup plane vertex buffers
    renderEncoder.setVertexBuffer(imagePlaneVertexBuffer, offset: 0, index: 0)
    renderEncoder.setVertexBuffer(scenePlaneVertexBuffer, offset: 0, index: 1)

    // Setup textures for the composite fragment shader
    renderEncoder.setFragmentBuffer(sharedUniformBuffer, offset: sharedUniformBufferOffset, index: Int(kBufferIndexSharedUniforms.rawValue))
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureY), index: 0)
    renderEncoder.setFragmentTexture(CVMetalTextureGetTexture(textureCbCr), index: 1)
    renderEncoder.setFragmentTexture(sceneColorTexture, index: 2)
    renderEncoder.setFragmentTexture(sceneDepthTexture, index: 3)
    renderEncoder.setFragmentTexture(alphaTexture, index: 4)
    renderEncoder.setFragmentTexture(dilatedDepthTexture, index: 5)

    // Draw final quad to display
    renderEncoder.drawPrimitives(type: .triangleStrip, vertexStart: 0, vertexCount: 4)
    renderEncoder.popDebugGroup()
}
How could I apply the CIFilter to only the alphaTexture generated by the ARMatteGenerator?
I don't think you want to apply a CIFilter to the alphaTexture. I assume you're using Apple's Effecting People Occlusion in Custom Renderers sample code. If you watch this year's Bringing People into AR WWDC session, they talk about generating a segmentation matte using ARMatteGenerator, which is what is being done with alphaTexture = matteGenerator.generateMatte(from: currentFrame, commandBuffer: commandBuffer). alphaTexture is an MTLTexture that is essentially an alpha mask for where humans have been detected in the camera frame (i.e. completely opaque where a human is and completely transparent where a human is not).
Adding a filter to the alpha texture won't filter the final rendered image but will simply affect the mask that is used in the compositing. If you're trying to achieve the effect shown in the video linked in your previous question, I would recommend adjusting the Metal shader where the compositing occurs. In the session, they point out that they compare the dilatedDepth and the renderedDepth to decide whether to draw virtual content or pixels from the camera:
fragment half4 customComposition(...) {
    half4 camera = cameraTexture.sample(s, in.uv);
    half4 rendered = renderedTexture.sample(s, in.uv);
    float renderedDepth = renderedDepthTexture.sample(s, in.uv);
    half4 scene = mix(rendered, camera, rendered.a);

    half matte = matteTexture.sample(s, in.uv);
    float dilatedDepth = dilatedDepthTexture.sample(s, in.uv);

    if (dilatedDepth < renderedDepth) { // People in front of rendered
        // mix together the virtual content and camera feed based on the alpha provided by the matte
        return mix(scene, camera, matte);
    } else {
        // People are not in front so just return the scene
        return scene;
    }
}
Unfortunately, this is done slightly differently in the sample code, but it's still fairly easy to modify. Open up Shaders.metal and find the compositeImageFragmentShader function. Toward the end of the function you'll see half4 occluderResult = mix(sceneColor, cameraColor, alpha); This is essentially the same operation as mix(scene, camera, matte); that we saw above. We're deciding whether to use a pixel from the scene or a pixel from the camera feed based on the segmentation matte. We can easily replace the camera image pixel with an arbitrary rgba value by replacing cameraColor with a half4 that represents a color. For example, we could use half4(float4(0.0, 0.0, 1.0, 1.0)) to paint all of the pixels within the segmentation matte blue:
…
// Replacing camera color with blue
half4 occluderResult = mix(sceneColor, half4(float4(0.0, 0.0, 1.0, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
Of course, you can apply other effects as well. Dynamic grayscale static is pretty easy to achieve.
Above compositeImageFragmentShader add:
float random(float offset, float2 tex_coord, float time) {
    // pick two numbers that are unlikely to repeat
    float2 non_repeating = float2(12.9898 * time, 78.233 * time);

    // multiply our texture coordinates by the non-repeating numbers, then add them together
    float sum = dot(tex_coord, non_repeating);

    // calculate the sine of our sum to get a range between -1 and 1
    float sine = sin(sum);

    // multiply the sine by a big, non-repeating number so that even a small change will result in a big color jump
    float huge_number = sine * 43758.5453 * offset;

    // get just the numbers after the decimal point
    float fraction = fract(huge_number);

    // send the result back to the caller
    return fraction;
}
(taken from @twostraws' ShaderKit)
Then modify compositeImageFragmentShader to:
…
float randFloat = random(1.0, cameraTexCoord, rgb[0]);
half4 occluderResult = mix(sceneColor, half4(float4(randFloat, randFloat, randFloat, 1.0)), alpha);
half4 mattingResult = mix(sceneColor, occluderResult, showOccluder);
return mattingResult;
You should get dynamic grayscale static over the people detected in the frame.
Finally, the debugger seems to have a hard time keeping up with the app. For me, when running attached to Xcode, the app would freeze shortly after launch, but it was typically smooth when running on its own.
I have created a WPF application where I need to allow a user to draw a rectangle on an existing loaded image (a TIFF) and have the portion of the image inside the rectangle saved as a separate image.
I am referencing Leadtools.Windows.Controls and using the RasterImageViewer control.
Below is the code for the event handler that runs when the user has finished drawing the rectangle.
private void ImageViewer_InteractiveUserRectangle(object sender, RectangleInteractiveEventArgs e)
{
    if (e.Status == InteractiveModeStatus.End)
    {
        var img = ImageViewer.Image;

        var top = Convert.ToInt32(e.Bounds.Top);
        var left = Convert.ToInt32(e.Bounds.Left);
        var width = Convert.ToInt32(e.Bounds.Width);
        var height = Convert.ToInt32(e.Bounds.Height);

        var rect = new Leadtools.LeadRect(left, top, width, height);
        var cmd = new Leadtools.ImageProcessing.CropCommand(rect);
        cmd.Run(img);

        _codecs.Save(img, @"c:\temp\test.tif",
            RasterImageFormat.CcittGroup4, 1, 1, 1, -1, CodecsSavePageMode.Append);
    }
}
I am getting a separate cropped image, but it does not match the area drawn with the rectangle. I have tried various methods from the examples but they were all for Windows Forms applications and not WPF. Any help with what I am missing would be greatly appreciated.
The issue is that the ImageViewer user-rectangle bounds are returned in control coordinates, and you need to convert these to the image coordinates that the CropCommand expects.
According to the Documentation here:
https://www.leadtools.com/help/leadtools/v19/dh/wl/rectangleinteractiveeventargs-bounds.html
The coordinates are always in control (display) coordinates. You can use PointToImageCoordinates and BoundsToImageCoordinates to map a value in control or display coordinates (what is on screen) to image coordinates (actual x and y location in the image pixels). You can use PointFromImageCoordinates and BoundsFromImageCoordinates to map a value in image coordinates (actual x and y location in the image pixels) to control or display coordinates (what is on screen).
Here is the updated code to make it work for your project:
if (e.Status == Leadtools.Windows.Controls.InteractiveModeStatus.End)
{
    var img = imageViewer.Image;

    // Convert the rectangle from control (display) coordinates to image coordinates
    var imgRect = imageViewer.BoundsToImageCoordinates(e.Bounds);

    var top = Convert.ToInt32(imgRect.Top);
    var left = Convert.ToInt32(imgRect.Left);
    var width = Convert.ToInt32(imgRect.Width);
    var height = Convert.ToInt32(imgRect.Height);

    var rect = new Leadtools.LeadRect(left, top, width, height);
    var cmd = new Leadtools.ImageProcessing.CropCommand(rect);
    cmd.Run(img);

    _codecs.Save(img, @"c:\temp\test.tif",
        RasterImageFormat.CcittGroup4, 1, 1, 1, -1, CodecsSavePageMode.Append);
}
When starting the VR tool on mobile and looking straight ahead, I would like the view to show the whole model in the center of the screen. The view should be at a slight angle so I can see the whole building floor. Currently the camera points straight ahead, which leaves a view where you cannot see the whole model. How could I achieve this?
For example, in this Autodesk example, the model is in the center when you enter VR.
http://viewervr.herokuapp.com/
Here is the current code with which I am trying to adjust the camera position:
document.getElementById("toolbar-vrTool").addEventListener("click", function () {
    let _navapi = viewer.navigation;
    let _camera = _navapi.getCamera();

    let xValue = viewer.getCamera().position.x;
    let yValue = viewer.getCamera().position.y;
    let zValue = viewer.getCamera().position.z;

    zValue = zValue * 0.5;
    yValue = (zValue * 0.7071) * -1;

    _camera.position.set(xValue, yValue, zValue);
});
Current view
View I would like to have
There is a function named fitToView() that does exactly what you want, but you need to wait for the geometry to be fully loaded before calling it. I also added a call to setHomeViewFrom() in the example below to reset the Home position to the fitToView() result for later navigation.
oViewer.addEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, onViewerGeometryLoaded);

function onViewerGeometryLoaded() {
    oViewer.removeEventListener(Autodesk.Viewing.GEOMETRY_LOADED_EVENT, onViewerGeometryLoaded);
    oViewer.fitToView(true);
    setTimeout(function () {
        oViewer.autocam.setHomeViewFrom(oViewer.navigation.getCamera());
    }, 1000);
}
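As a usage sketch only: if the goal is specifically to recenter the model when the VR tool button from the question is clicked, the same two calls could be made in that handler instead. This assumes the geometry has already finished loading by the time the button is pressed, and uses the question's viewer variable:
document.getElementById("toolbar-vrTool").addEventListener("click", function () {
    // Same two calls as in onViewerGeometryLoaded() above:
    // frame the whole model, then store that framing as the Home view
    viewer.fitToView(true);
    viewer.autocam.setHomeViewFrom(viewer.navigation.getCamera());
});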
I want to set an external app's window frame from my own project, and I have run into a problem:
I can set its size and its position, but I want it to be full screen, and even if I set the size to full screen it does not actually resize to it (I have to press the button several times).
I think the problem is with its frame.
var frame = CGRectMake(0.0, 23.0, 1280.0, 770.0) //I post only this as size and position seem to work
let sizeCoo: AXValue = AXValueCreate(AXValueType(rawValue: kAXValueCGSizeType)!, &size)!.takeRetainedValue()
let positionCoo: AXValue = AXValueCreate(AXValueType(rawValue: kAXValueCGPointType)!, &position)!.takeRetainedValue()
let frameCoo: AXValue = AXValueCreate(AXValueType(rawValue: kAXValueCGRectType)!, &frame)!.takeRetainedValue()
let errorFrame = AXUIElementSetAttributeValue(myApp, "AXFrame", frameCoo)
Setting sizeCoo and positionCoo gives me an error with raw value = 0 (everything is OK), but setting frameCoo returns an error with rawValue = -25205, which means:
/*! The attribute is not supported by the AXUIElementRef. */
kAXErrorAttributeUnsupported = -25205
Yet when I list the attributes for this window, I clearly see an "AXFrame" attribute.
What am I doing wrong?