I am using Metal to display textured 3D objects.
On first load the texture shows fine, but if I swap in a new texture at runtime, the new one gets merged with the old one.
My code is:
let texture = MetalTexture(resourceName: "", ext: "", mipmaped: true)
// texture.path = "\(NSBundle.mainBundle().bundlePath)/smlroot/models/bodies/FB020.png"
texture.path = pngPath
texture.loadTexture(device: device, commandQ: commandQ, flip: false)
super.readPly(vertices, faces: faces, texture: texture.texture)
Here I am creating a new texture object every time I change the texture at runtime. Then I use this code to load it:
func loadTexture(device device: MTLDevice, commandQ: MTLCommandQueue, flip: Bool) {
    let image = UIImage(contentsOfFile: path)?.CGImage
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    width = CGImageGetWidth(image!)
    height = CGImageGetHeight(image!)
    let rowBytes = width * bytesPerPixel
    let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, rowBytes, colorSpace, CGImageAlphaInfo.PremultipliedLast.rawValue)
    let bounds = CGRect(x: 0, y: 0, width: Int(width), height: Int(height))
    CGContextClearRect(context!, bounds)
    if flip == false {
        CGContextTranslateCTM(context!, 0, CGFloat(self.height))
        CGContextScaleCTM(context!, 1.0, -1.0)
    }
    CGContextDrawImage(context!, bounds, image!)

    let texDescriptor = MTLTextureDescriptor.texture2DDescriptorWithPixelFormat(MTLPixelFormat.RGBA8Unorm, width: Int(width), height: Int(height), mipmapped: isMipmaped)
    target = texDescriptor.textureType
    texture = device.newTextureWithDescriptor(texDescriptor)

    let pixelsData = CGBitmapContextGetData(context!)
    let region = MTLRegionMake2D(0, 0, Int(width), Int(height))
    texture.replaceRegion(region, mipmapLevel: 0, withBytes: pixelsData, bytesPerRow: Int(rowBytes))

    if isMipmaped == true {
        generateMipMapLayersUsingSystemFunc(texture, device: device, commandQ: commandQ, block: { (buffer) -> Void in
            print("mips generated")
        })
    }
    print("mipCount: \(texture.mipmapLevelCount)")
}
While rendering the object, I use the pipeline below to draw it:
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, atIndex: 0)
renderEncoder.setCullMode(.None)
renderEncoder.setFragmentTexture(texture, atIndex: 0)
if let samplerState = samplerState {
    renderEncoder.setFragmentSamplerState(samplerState, atIndex: 0)
}
let nodeModelMatrix = self.modelMatrix(rotationY)
nodeModelMatrix.multiplyLeft(parentModelViewMatrix)
let uniformBuffer = bufferProvider.nextUniformsBuffer(projectionMatrix, modelViewMatrix: nodeModelMatrix, light: light)
renderEncoder.setVertexBuffer(uniformBuffer, offset: 0, atIndex: 1)
renderEncoder.setFragmentBuffer(uniformBuffer, offset: 0, atIndex: 1)
//change this.
//renderEncoder.drawPrimitives(.Triangle, vertexStart: 0, vertexCount: vertexCount)
renderEncoder.drawIndexedPrimitives(.Triangle, indexCount: indexCount, indexType: .UInt32, indexBuffer: indexBuffer, indexBufferOffset: 0)
And I am getting an overlapped texture. Can anyone help me identify the problem?
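One thing worth checking (a hedged guess, not a confirmed diagnosis): since a brand-new MTLTexture is built on every change, an "overlapped" result usually means the encoder is still binding a stale texture reference, or sampling mip levels that were generated asynchronously after the frame was already encoded. A minimal sketch of a swap helper, assuming the node keeps the texture property that the render code passes to setFragmentTexture:

// Hypothetical helper: fully load the replacement, then publish it, so the
// next frame's setFragmentTexture(texture, atIndex: 0) binds the new one
// and the old texture can never be encoded again.
func swapTexture(toPngAt pngPath: String) {
    let newTexture = MetalTexture(resourceName: "", ext: "", mipmaped: true)
    newTexture.path = pngPath
    newTexture.loadTexture(device: device, commandQ: commandQ, flip: false)
    self.texture = newTexture.texture
}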
I have a round avatar image with a transparent background. I want to create a new round image of the same size from the initial image, with a gradient background behind it, so it looks like it is standing in the sky instead of having a transparent background.
Since I will use this image as a tab bar item's image, I couldn't use a UIView and edit its background layer.
And to make it reusable I wanted to create a UIImage extension.
Below is what I do:
extension UIImage {
    func gradientImage() -> UIImage? {
        let width = self.size.width
        let height = self.size.height
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard let bitmapContext = CGContext(data: nil,
                                            width: Int(width),
                                            height: Int(height),
                                            bitsPerComponent: 8,
                                            bytesPerRow: 0,
                                            space: colorSpace,
                                            bitmapInfo: bitmapInfo.rawValue) else { return nil }
        let locations: [CGFloat] = [0.0, 1.0]
        let top = R.color.duckDimDarkGrey()?.cgColor
        let bottom = R.color.duckPencilDark()?.cgColor
        let colors = [top, bottom] as CFArray
        guard let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: locations) else {
            return nil
        }
        bitmapContext.drawLinearGradient(gradient, start: CGPoint.zero, end: CGPoint(x: 0, y: size.height), options: CGGradientDrawingOptions())
        guard let cgImage = UIGraphicsGetImageFromCurrentImageContext()?.cgImage else { return nil }
        UIGraphicsEndImageContext()
        let img = UIImage(cgImage: cgImage)
        return img
    }
}
Here is how I use it:
let image1 = UIImage(named: "test.png")
self.tabBar.items?[3].image = image1?.gradientImage()
However, I am getting an empty image somehow.
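One likely cause (hedged, since the R.color resources aren't available here): the snippet opens a UIGraphics context with UIGraphicsBeginImageContextWithOptions, but then draws the gradient into a second bitmapContext that is never read back, and never draws the avatar at all, so UIGraphicsGetImageFromCurrentImageContext() returns an empty image. A minimal sketch that draws both the gradient and the avatar into the single UIGraphics context, with plain UIColors standing in for the R.color calls:

extension UIImage {
    func gradientImage() -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer { UIGraphicsEndImageContext() }
        guard let ctx = UIGraphicsGetCurrentContext() else { return nil }

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let colors = [UIColor.darkGray.cgColor, UIColor.black.cgColor] as CFArray
        guard let gradient = CGGradient(colorsSpace: colorSpace, colors: colors,
                                        locations: [0.0, 1.0]) else { return nil }

        // Gradient first, then the avatar composited over it.
        ctx.drawLinearGradient(gradient, start: .zero,
                               end: CGPoint(x: 0, y: size.height), options: [])
        draw(in: CGRect(origin: .zero, size: size))

        return UIGraphicsGetImageFromCurrentImageContext()
    }
}

Note that UITabBar tints item images by default, so the result may also need .withRenderingMode(.alwaysOriginal) to keep its own colors.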
I need to create an MTLTexture with my custom data (which is currently filled with 0). To do so, I use the following implementation:
private func createTexture(frame: UnsafeMutableRawPointer) -> MTLTexture? {
    let width = 2048
    let height = 2048
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(
        pixelFormat: MTLPixelFormat.rgba8Unorm,
        width: width,
        height: height,
        mipmapped: false)
    textureDescriptor.usage = [.shaderWrite, .shaderRead]
    guard let texture: MTLTexture = device?.makeTexture(descriptor: textureDescriptor) else {
        logger?.log(severity: .error, msg: "create texture FAILED.")
        return nil
    }
    let region = MTLRegion.init(origin: MTLOrigin.init(x: 0, y: 0, z: 0), size: MTLSize.init(width: texture.width, height: texture.height, depth: 4));
    //MARK: >>> JUST FOR TEST
    let count = width * height * 4
    let stride = MemoryLayout<CChar>.stride
    let alignment = MemoryLayout<CChar>.alignment
    let byteCount = stride * count
    let p = UnsafeMutableRawPointer.allocate(byteCount: byteCount, alignment: alignment)
    let data = p.initializeMemory(as: CChar.self, repeating: 0, count: count)
    //MARK: <<<
    texture.replace(region: region, mipmapLevel: 0, withBytes: data, bytesPerRow: width * 4)
    return texture
}
So here I created a descriptor with 4 channels, then a region with depth: 4, then an UnsafeMutableRawPointer filled with stride * count bytes of data, but I get an error on this line:
texture.replace(region: region, mipmapLevel: 0, withBytes: data, bytesPerRow: width * 4)
_validateReplaceRegion:155: failed assertion `(origin.z + size.depth)(4) must be <= depth(1).'
What am I doing wrong?
The depth property in the following line is incorrect:
let region = MTLRegion.init(origin: MTLOrigin.init(x: 0, y: 0, z: 0), size: MTLSize.init(width: texture.width, height: texture.height, depth: 4));
The depth property describes the number of elements in the z dimension. For a 2D texture it should be 1.
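A minimal sketch of the corrected region; the four channels are already described by the .rgba8Unorm pixel format and the bytesPerRow of width * 4, so the region only needs the texel extent:

// depth: 1 — a 2D texture has a single slice along z.
let region = MTLRegion(origin: MTLOrigin(x: 0, y: 0, z: 0),
                       size: MTLSize(width: texture.width, height: texture.height, depth: 1))
// Equivalent convenience form:
// let region = MTLRegionMake2D(0, 0, texture.width, texture.height)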
I'm having problems understanding how the pixelFormat of an MTLTexture relates to the properties of an NSBitmapImageRep.
In particular, I want to use a Metal compute kernel (or the built-in MPS method) to subtract one image from another and keep the negative values temporarily.
I have a method that creates a MTLTexture from a bitmap with a specified pixelFormat:
func textureFrom(bitmap: NSBitmapImageRep, pixelFormat: MTLPixelFormat) -> MTLTexture? {
    guard !bitmap.isPlanar else {
        return nil
    }
    let region = MTLRegionMake2D(0, 0, bitmap.pixelsWide, bitmap.pixelsHigh)
    let textureDescriptor = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: pixelFormat, width: bitmap.pixelsWide, height: bitmap.pixelsHigh, mipmapped: false)
    guard let texture = device.makeTexture(descriptor: textureDescriptor),
          let src = bitmap.bitmapData else { return nil }
    texture.replace(region: region, mipmapLevel: 0, withBytes: src, bytesPerRow: bitmap.bytesPerRow)
    return texture
}
Then I use the textures to do some computation (like a subtraction) and when I'm done, I want to get a bitmap back. In the case of textures with a .r8Snorm pixelFormat, I thought I could do:
func bitmapFrom(r8SnormTexture: MTLTexture?) -> NSBitmapImageRep? {
    guard let texture = r8SnormTexture,
          texture.pixelFormat == .r8Snorm else { return nil }
    let bytesPerPixel = 1
    let imageByteCount = Int(texture.width * texture.height * bytesPerPixel)
    let bytesPerRow = texture.width * bytesPerPixel
    var src = [Float](repeating: 0, count: imageByteCount)
    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    texture.getBytes(&src, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
    let colorSpace = CGColorSpaceCreateDeviceGray()
    let bitsPerComponent = 8
    let context = CGContext(data: &src, width: texture.width, height: texture.height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    guard let dstImageFilter = context?.makeImage() else {
        return nil
    }
    return NSBitmapImageRep(cgImage: dstImageFilter)
}
But the negative values are not preserved; they are clamped to zero somehow.
Any insight into how Swift goes from bitmap to texture and back would be appreciated.
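Two details in the readback above are worth flagging (hedged, since the project isn't runnable here): getBytes copies raw texture bytes, so reading a .r8Snorm texture into a [Float] buffer misinterprets every signed byte, and an 8-bit grayscale CGContext has no representation for negative values in any case. A sketch that keeps the signed data intact by reading into Int8, remapping only the copy used for display:

// Read the raw signed bytes out of a .r8Snorm texture (values -128...127).
func signedBytes(from texture: MTLTexture) -> [Int8] {
    let bytesPerRow = texture.width   // 1 byte per pixel for .r8Snorm
    var raw = [Int8](repeating: 0, count: texture.width * texture.height)
    let region = MTLRegionMake2D(0, 0, texture.width, texture.height)
    texture.getBytes(&raw, bytesPerRow: bytesPerRow, from: region, mipmapLevel: 0)
    return raw
}

// For an 8-bit grayscale bitmap, shift the signed range into 0...255 explicitly;
// the negatives survive in `raw`, and only this displayed copy is remapped:
// let displayable = raw.map { UInt8(Int($0) + 128) }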
I'm trying to place an icon (in the form of an image) next to the text in a UILabel. The icons are imported into the assets in all three sizes and are not blurry at all when I simply place them in a normal UIImageView.
However, within the NSTextAttachment they suddenly become extremely blurry and are too big as well.
I already tried several things on my own and also tried nearly every snippet I could find online; nothing helps. This is what I'm left with:
func updateWinnableCoins(coins: Int) {
    let attachImg = NSTextAttachment()
    attachImg.image = resizeImage(image: #imageLiteral(resourceName: "geld"), targetSize: CGSize(width: 17.0, height: 17.0))
    attachImg.setImageHeight(height: 17.0)
    let imageOffsetY: CGFloat = -3.0
    attachImg.bounds = CGRect(x: 0, y: imageOffsetY, width: attachImg.image!.size.width, height: attachImg.image!.size.height)
    let attchStr = NSAttributedString(attachment: attachImg)
    let completeText = NSMutableAttributedString(string: "")
    let tempText = NSMutableAttributedString(string: "You can win " + String(coins) + " ")
    completeText.append(tempText)
    completeText.append(attchStr)
    self.lblWinnableCoins.textAlignment = .left
    self.lblWinnableCoins.attributedText = completeText
}
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let newRect = CGRect(x: 0, y: 0, width: targetSize.width, height: targetSize.height).integral
    UIGraphicsBeginImageContextWithOptions(targetSize, false, 0)
    let context = UIGraphicsGetCurrentContext()
    // Set the quality level to use when rescaling
    context!.interpolationQuality = CGInterpolationQuality.default
    let flipVertical = CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: targetSize.height)
    context!.concatenate(flipVertical)
    // Draw into the context; this scales the image
    context?.draw(image.cgImage!, in: CGRect(x: 0.0, y: 0.0, width: newRect.width, height: newRect.height))
    // Get the resized image from the context and create a UIImage
    let newImageRef = context!.makeImage()!
    let newImage = UIImage(cgImage: newImageRef)
    UIGraphicsEndImageContext()
    return newImage
}
extension NSTextAttachment {
    func setImageHeight(height: CGFloat) {
        guard let image = image else { return }
        let ratio = image.size.width / image.size.height
        bounds = CGRect(x: bounds.origin.x, y: bounds.origin.y, width: ratio * height, height: height)
    }
}
And this is how it looks:
The font size of the UILabel is 17, so I set the text attachment to 17 as well. When I set it to 9 it fits, but it's still very blurry.
What can I do about that?
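One likely culprit (hedged, without the project to test against): UIImage(cgImage:) discards scale information, so the @2x/@3x bitmap that resizeImage renders is treated as a @1x image two to three times too large, and the label scales it down blurrily. A sketch of the same resize staying in UIKit, which keeps the device scale on the returned image:

// UIGraphicsImageRenderer preserves the screen scale, so a 17x17-point
// attachment stays 17x17 points instead of 34 or 51 pixels pretending
// to be points.
func resizeImage(image: UIImage, targetSize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: targetSize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: targetSize))
    }
}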
Staying in SpriteKit, is it possible to create more "artistic" text with the vastly greater control TextKit provides, and then (somehow) convert these strings to images so they can be used as SKSpriteNodes?
I ask because I'd like to do some more serious kerning... much greater spacing, and a few other things that aren't possible with SKLabelNode but are part of TextKit. I'd like them to be bitmaps as soon as I'm done getting them to look the way I want.
But I can't find a way to turn a TextKit string into an image.
You can draw your text in a CGContext, then create a texture from it and assign that texture to an SKSpriteNode.
Here is an example from this GitHub project:
class ASAttributedLabelNode: SKSpriteNode {
    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    init(size: CGSize) {
        super.init(texture: nil, color: UIColor.clear, size: size)
    }

    var attributedString: NSAttributedString! {
        didSet {
            draw()
        }
    }

    func draw() {
        guard let attrStr = attributedString else {
            texture = nil
            return
        }
        let scaleFactor = UIScreen.main.scale
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue
        guard let context = CGContext(data: nil, width: Int(size.width * scaleFactor), height: Int(size.height * scaleFactor), bitsPerComponent: 8, bytesPerRow: Int(size.width * scaleFactor) * 4, space: colorSpace, bitmapInfo: bitmapInfo) else {
            return
        }
        context.scaleBy(x: scaleFactor, y: scaleFactor)
        context.concatenate(CGAffineTransform(a: 1, b: 0, c: 0, d: -1, tx: 0, ty: size.height))
        UIGraphicsPushContext(context)
        let strHeight = attrStr.boundingRect(with: size, options: .usesLineFragmentOrigin, context: nil).height
        let yOffset = (size.height - strHeight) / 2.0
        attrStr.draw(with: CGRect(x: 0, y: yOffset, width: size.width, height: strHeight), options: .usesLineFragmentOrigin, context: nil)
        if let imageRef = context.makeImage() {
            texture = SKTexture(cgImage: imageRef)
        } else {
            texture = nil
        }
        UIGraphicsPopContext()
    }
}
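Usage might look like this (a hypothetical scene snippet; the size, font, and kerning values are placeholders): set the attributed string, and the node rasterizes it into its texture via the didSet observer.

// Inside an SKScene, e.g. in didMove(to:):
let labelNode = ASAttributedLabelNode(size: CGSize(width: 320, height: 80))
labelNode.attributedString = NSAttributedString(
    string: "W I D E",
    attributes: [
        .font: UIFont.boldSystemFont(ofSize: 32),
        .kern: 14,   // the kind of spacing SKLabelNode can't express
        .foregroundColor: UIColor.white
    ])
labelNode.position = CGPoint(x: frame.midX, y: frame.midY)
addChild(labelNode)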