pixelogik / ColorCube

Dominant color extraction for iOS, macOS and Python

CGContextDrawImage: invalid context 0x0

devsujith opened this issue · comments

CGContextDrawImage: invalid context 0x0. This is a serious error. This application, or a library it uses, is using an invalid context and is thereby contributing to an overall degradation of system stability and reliability. This notice is a courtesy: please fix this problem. It will become a fatal error in an upcoming update.

commented

Thanks for the hint.

This occurs when CGImageGetWidth and CGImageGetHeight return a value of 0 in rawPixelDataFromImage:
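For illustration, a minimal caller-side guard that skips extraction when the image has no CGImage backing or a zero-sized bitmap, which is exactly the case that triggers the warning. This is a sketch, not part of CCColorCube; the helper name and the count of 4 are arbitrary:

```swift
func safeExtractColors(image: UIImage, cube: CCColorCube) -> [UIColor] {
    // A CIImage-backed UIImage (e.g. straight from a pixel buffer) has no CGImage
    guard let cgImage = image.CGImage else { return [] }
    // Zero-sized bitmaps are what lead to the invalid-context warning
    guard CGImageGetWidth(cgImage) > 0 && CGImageGetHeight(cgImage) > 0 else { return [] }
    let colors = cube.extractBrightColorsFromImage(image, avoidColor: nil, count: 4)
    return (colors as? [UIColor]) ?? []
}
```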

Thank you… I took a similar approach for the fix.

On 29-Sep-2015, at 10:43 pm, Michael Ciarlo notifications@github.com wrote:

occurs when CGImageGetWidth and CGImageGetHeight

I'm getting this error, did you manage to fix it?

This was some months back. The issue occurs when the image passed as a parameter has a null value for its height or width. So first I made sure, in the tap gesture handler, that the rect I create from the touch location falls within the image itself, and then I checked whether the array returned by the extractColorsFromImage function actually has values, so that it won't crash.
For me the main issue was that when creating the rect from the gesture delegate, the touch position reported while the image is zoomed caused bugs in calculating the exact position in the original image.
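Roughly what that clamping looks like as a sketch (the helper name and the 40-point sample size are made up, and it assumes the touch location has already been converted into the image's own coordinate space):

```swift
func colorsAroundTouch(location: CGPoint, inImage image: UIImage) -> [UIColor] {
    let side: CGFloat = 40
    let touchRect = CGRect(x: location.x - side / 2, y: location.y - side / 2, width: side, height: side)
    let imageBounds = CGRect(origin: CGPointZero, size: image.size)
    // Clamp the sampling rect to the image so we never crop outside of it
    let sampleRect = CGRectIntersection(touchRect, imageBounds)
    if CGRectIsEmpty(sampleRect) { return [] }           // touch landed outside the image
    guard let cgImage = image.CGImage else { return [] }
    guard let cropped = CGImageCreateWithImageInRect(cgImage, sampleRect) else { return [] }
    // Only ask ColorCube for colors once we know the crop is non-empty
    let colors = CCColorCube().extractBrightColorsFromImage(UIImage(CGImage: cropped), avoidColor: nil, count: 4)
    return (colors as? [UIColor]) ?? []
}
```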

commented

@txaidw Could you please check if this is caused by using images with zero height/width? Of course the class should not do anything that leads to this in that case, but computing colors on empty images does not make sense.

I don't know what this has to do with any gesture recognizers btw; ColorCube does not know anything about touches.

In my case, when I run with UIImage(named: "test") it works, but when I get an image from the camera, it doesn't. And yes, I already put the camera image in a UIImageView, and it's fine there. What can it be?

commented

What if you downscale your camera image a bit? So not passing in the original image coming from the camera but using a downscaled version (that speeds up processing and improves the perceptual significance of the computed colors, btw).
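For reference, a minimal downscaling sketch along those lines (the 256-point cap is an arbitrary choice, not something ColorCube requires):

```swift
func downscale(image: UIImage, maxDimension: CGFloat = 256) -> UIImage {
    let largestSide = max(image.size.width, image.size.height)
    if largestSide <= maxDimension { return image }
    let scale = maxDimension / largestSide
    let newSize = CGSize(width: image.size.width * scale, height: image.size.height * scale)
    // Redraw into a small opaque bitmap; ColorCube then has far fewer pixels to scan
    UIGraphicsBeginImageContextWithOptions(newSize, true, 1)
    image.drawInRect(CGRect(origin: CGPointZero, size: newSize))
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
```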

I'm actually using the minimum camera resolution; going to try with a little more, let's see.

This is what I'm getting. It works with a UIImage loaded from the bundle though. Any ideas?

```
Mar 5 11:36:35 CameraTests[1530] <Error>: CGContextDrawImage: invalid context 0x0. Backtrace: <-[CCColorCube findLocalMaximaInImage:flags:]+112> <-[CCColorCube findAndSortMaximaInImage:flags:]+84> <-[CCColorCube extractBrightColorsFromImage:avoidColor:count:]+116> <_TFC11CameraTests11CameraInput13captureOutputfS0_FTGSQCSo15AVCaptureOutput_21didOutputSampleBufferGSQCSo14CMSampleBuffer_ <_TToFC11CameraTests11CameraInput13captureOutputfS0_FTGSQCSo15AVCaptureOutput_21didOutputSampleBufferGSQCSo14CMSampleBuff <<redacted>+308> <<redacted>+192> <<redacted>+264> <<redacted>+232> <_dispatch_client_callout+16> <_dispatch_source_latch_and_call+2848> <_dispatch_source_invoke+808> <_dispatch_queue_drain+1576> <_dispatch_queue_invoke+464> <_dispatch_queue_drain+1576> <_dispatch_queue_invoke+464> <_dispatch_root_queue_drain+760> <_dispatch_worker_thread3+132> <_pthread_wqthread+1092>
```

This is how I'm calling it:
```swift
public func captureOutput(captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, fromConnection connection: AVCaptureConnection!) {

    connection.videoOrientation = AVCaptureVideoOrientation(rawValue: UIApplication.sharedApplication().statusBarOrientation.rawValue)!

    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else
    {
        return
    }
    let image = UIImage(CIImage: CIImage(CVPixelBuffer: pixelBuffer))
    let newColorsArray = CCColorCube().extractBrightColorsFromImage(image, avoidColor: nil, count: 1)

    dispatch_async(dispatch_get_main_queue()) {
        (self.preview as? UIImageView)?.image = image

        self.output?.backgroundColor = newColorsArray.first as? UIColor
    }
}

```

It seems I found the problem:
`let image = UIImage(CIImage: CIImage(CVPixelBuffer: pixelBuffer))`
This line was creating an image that is not supported by the library (presumably because a UIImage backed only by a CIImage has no CGImage, so its width and height read as 0, as described above).
This helper worked:
```swift
class CameraUtil {
    // Converts a camera sample buffer into a CGImage-backed UIImage that
    // CCColorCube can read (unlike a CIImage-backed UIImage).
    class func imageFromSampleBuffer(sampleBuffer: CMSampleBufferRef) -> UIImage {
        let imageBuffer: CVImageBufferRef = CMSampleBufferGetImageBuffer(sampleBuffer)!
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        let baseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0)

        let bytesPerRow: Int = CVPixelBufferGetBytesPerRow(imageBuffer)
        let width: Int = CVPixelBufferGetWidth(imageBuffer)
        let height: Int = CVPixelBufferGetHeight(imageBuffer)

        let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()!

        // 8 bits per component, BGRA little-endian layout
        let bitsPerComponent: Int = 8
        let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)

        // Draw the pixel buffer into a bitmap context and snapshot it as a CGImage
        let newContext: CGContextRef = CGBitmapContextCreate(baseAddress, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)!
        let imageRef: CGImageRef = CGBitmapContextCreateImage(newContext)!

        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
        let resultImage = UIImage(CGImage: imageRef, scale: 1.0, orientation: UIImageOrientation.Right)
        return resultImage
    }
}
```

For me it's a solved case! [Closed]