laserpilot / ofxCoreImage

This addon gives you several classes for easily using OS X's Core Image filters within openFrameworks.

convoluted api?

Ahbee opened this issue · comments

Why does the API look so complicated? Is it not possible to design one like this?

class ofxCIFilter {
public:
    ofxCIFilter();
    virtual ~ofxCIFilter();
    // in-place processing
    virtual void apply(ofImage &image) = 0;
    // out-of-place processing
    virtual void apply(ofImage &dst, const ofImage &src) = 0;
};

class ofxCIBlur : public ofxCIFilter {
public:
    void setRadius(float r);
    void apply(ofImage &image);
    void apply(ofImage &dst, const ofImage &src);
private:
    float radius;
};

I looked at the reference for Core Image, and it should be easy to convert between ofImage and CIImage. There is an initializer in CIImage that looks like this:

initWithData:options:

and there is a method in CIContext that looks like this:

render:toBitmap:rowBytes:bounds:format:colorSpace:

Using these two methods, conversion between ofImage and CIImage should be possible.
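
For illustration, a rough round trip could look like this (an untested Objective-C++ sketch with hypothetical helper names; note that initWithData:options: expects encoded image data, so for a raw pixel buffer CIImage's initWithBitmapData:bytesPerRow:size:format:colorSpace: initializer is the closer fit):

// Hypothetical conversion helpers, sketched against OF 0.8.x where
// ofImage::getPixels() returns an unsigned char* to RGBA data.
static CIImage* toCIImage(ofImage &src) {
    int w = src.getWidth();
    int h = src.getHeight();
    NSData *data = [NSData dataWithBytes:src.getPixels() length:w * h * 4];
    // a real implementation would cache and release the color space
    return [[CIImage alloc] initWithBitmapData:data
                                   bytesPerRow:w * 4
                                          size:CGSizeMake(w, h)
                                        format:kCIFormatRGBA8
                                    colorSpace:CGColorSpaceCreateDeviceRGB()];
}

static void toOfImage(ofImage &dst, CIImage *src, CIContext *context) {
    CGRect extent = [src extent];
    int w = extent.size.width;
    int h = extent.size.height;
    dst.allocate(w, h, OF_IMAGE_COLOR_ALPHA);
    [context render:src
           toBitmap:dst.getPixels()
           rowBytes:w * 4
             bounds:extent
             format:kCIFormatRGBA8
         colorSpace:CGColorSpaceCreateDeviceRGB()];
    dst.update(); // upload the new pixels to the texture
}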

Hey! Thanks for the feedback - I'm actually not that familiar with CIImage and Obj-C, so this was kind of my first rodeo making an addon like this, and I'm definitely looking for suggestions.

I don't find it particularly convoluted, since each filter has so many different parameters and there are a few different paradigms I could follow for how to chain filters and things like that. But, again, the apply paradigm could be another good way to look at it...

I could also do something like ofxCIFilter, ofxCIComposite, and ofxCITransition, because a lot of the 'convoluted' methods are there because I'm trying to cover all the different filter types that take 1, 2, or 3 input images, so it quickly gets out of hand.

Thanks for the suggestion to go ofImage -> CIImage... I wanted to give the user a more direct way to send things in besides drawing to an FBO, so it makes sense to try what you suggested. I'll look into it.

If you have any tips on how to go CIImage -> ofImage or ofTexture, I've been looking for a clean way to do that as well, so people can bring the CIImage back into OF land for processing.

I read more about Core Image. You don't have to create subclasses for each filter, since every filter uses the same set of parameters. I can try implementing something when I have time. For now, take a look at this interface:

First, declare string constants for each filter:

typedef std::string OFX_FILTER_TYPE;

extern const string OFX_FILTER_TYPE_BOX_BLUR;
extern const string OFX_FILTER_TYPE_COLOR_CLAMP;
extern const string OFX_FILTER_TYPE_COLOR_CROSS_POLYNOMIAL;
extern const string OFX_FILTER_TYPE_ADDITION_COMPOSITING;
extern const string OFX_FILTER_TYPE_BUMP_DISTORTION;
extern const string OFX_FILTER_TYPE_CHECKERBOARD_GENERATOR;
extern const string OFX_FILTER_TYPE_AFFINE_TRANSFORM;
extern const string OFX_FILTER_TYPE_GAUSSIAN_GRADIENT;
extern const string OFX_FILTER_TYPE_CIRCULAR_SCREEN;
extern const string OFX_FILTER_TYPE_AREA_AVERAGE;
extern const string OFX_FILTER_TYPE_SHARPEN_LUMINANCE;
extern const string OFX_FILTER_TYPE_BLEND_WITH_ALPHA_MASK;
extern const string OFX_FILTER_TYPE_AFFINE_CLAMP;
extern const string OFX_FILTER_TYPE_BARS_SWIPE_TRANSITION;

Then define them in a .cpp file:

const string OFX_FILTER_TYPE_BOX_BLUR = "CIBoxBlur";
const string OFX_FILTER_TYPE_COLOR_CLAMP = "CIColorClamp";
const string OFX_FILTER_TYPE_COLOR_CROSS_POLYNOMIAL = "CIColorCrossPolynomial";
const string OFX_FILTER_TYPE_ADDITION_COMPOSITING = "CIAdditionCompositing";
const string OFX_FILTER_TYPE_BUMP_DISTORTION = "CIBumpDistortion";
const string OFX_FILTER_TYPE_CHECKERBOARD_GENERATOR = "CICheckerboardGenerator";
const string OFX_FILTER_TYPE_AFFINE_TRANSFORM = "CIAffineTransform";
const string OFX_FILTER_TYPE_GAUSSIAN_GRADIENT = "CIGaussianGradient";
const string OFX_FILTER_TYPE_CIRCULAR_SCREEN = "CICircularScreen";
const string OFX_FILTER_TYPE_AREA_AVERAGE = "CIAreaAverage";
const string OFX_FILTER_TYPE_SHARPEN_LUMINANCE = "CISharpenLuminance";
const string OFX_FILTER_TYPE_BLEND_WITH_ALPHA_MASK = "CIBlendWithAlphaMask";
const string OFX_FILTER_TYPE_AFFINE_CLAMP = "CIAffineClamp";
const string OFX_FILTER_TYPE_BARS_SWIPE_TRANSITION = "CIBarsSwipeTransition";

I think this is the ideal interface for a filter; it covers all filters, and there is no need to subclass. For different filters, just pass a different string value to setup(), and getOutput() returns an ofImage of the final result.

class ofxCIFilter {
public:
    ofxCIFilter();
    ~ofxCIFilter();

    void setup(OFX_FILTER_TYPE filter);
    // gets the result of the filter
    ofImage getOutput();

    // all filters will use a subset of these parameters
    void setImage(const ofImage &image);
    void setBackgroundImage(const ofImage &backgroundImage);
    void setTime(float time);
    void setAffineTransform(const ofMatrix4x4 &affineTransform);
    void setScale(float scale);
    void setAspectRatio(float aspectRatio);
    void setCenter(const ofVec2f &center);
    void setRadius(const ofVec2f &radius);
    void setAngle(float angle);
    void setRefraction(float refraction);
    void setWidth(float width);
    void setSharpness(float sharpness);
    void setIntensity(float intensity);
    void setEV(float ev);
    void setSaturation(float saturation);
    void setColor(ofFloatColor color);
    void setBrightness(float brightness);
    void setContrast(float contrast);
    void setGradientImage(const ofImage& gradientImage);
    void setMaskImage(const ofImage& maskImage);
    void setShadingImage(const ofImage& shadingImage);
    void setTargetImage(const ofImage& targetImage);
    void setExtent(ofRectangle extent);

private:
    CIFilter *_filter;

    CIImage *_image;
    CIImage *_backgroundImage;
    NSNumber *_time;
    NSAffineTransform *_affineTransform;
    NSNumber *_scale;
    NSNumber *_aspectRatio;
    CIVector *_center;
    NSNumber *_radius;
    NSNumber *_angle;
    NSNumber *_refraction;
    NSNumber *_width;
    NSNumber *_sharpness;
    NSNumber *_intensity;
    NSNumber *_EV;
    NSNumber *_saturation;
    CIColor *_color;
    NSNumber *_brightness;
    NSNumber *_contrast;
    CIImage *_gradientImage;
    CIImage *_maskImage;
    CIImage *_targetImage;
    CIVector *_extent;

private:
    static CIContext *_context;
    static CGColorSpaceRef colorSpace;
    static CIContext* getContext();
    static void convert(CIImage **dst,const ofImage &src);
    static void convert(ofImage &dst,CIImage *src);
    static NSAffineTransform* toNSAffineTransform(const ofMatrix4x4 &affineTransform);
    static CIVector* toCIVector(const ofVec2f &v);
    static CIVector* toCIVector(const ofRectangle &v);
    static NSString* toNSString(const string& s);
};
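
For illustration, using it might look like this (a hypothetical usage sketch against the declarations above):

ofxCIFilter blur;
blur.setup(OFX_FILTER_TYPE_BOX_BLUR);
blur.setImage(someInput);          // 'someInput' is any loaded ofImage
blur.setRadius(ofVec2f(10, 10));
ofImage result = blur.getOutput(); // runs the filter and converts back
result.draw(0, 0);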

It's your call, though. If you are willing to change your API, I can help; otherwise I'll probably just create another Core Image addon at some point and call it something else. Thoughts?

Hmmm, that's tempting. I can see how that would work better for some cases.

I think my original thinking when I made all these subclasses was that it would be easy for a beginner to go in and see how everything ticks. Your implementation makes sense for more dynamic handling, but it would need more careful handling of the set methods, I think, because I fear people thinking they could "setContrast" or "setAngle" on a Gaussian blur. So we would either need to protect those methods and expose them publicly via subclasses, or have a check in each method that determines whether that setting exists for that filter. We could also have a method that just returns a string of the possible settings for that filter, like "string getAvailableSettings()".
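
For example, the guard could lean on CIFilter's inputKeys property, which lists the parameter names a given filter actually accepts (a sketch; hasSetting is a hypothetical helper):

// Hypothetical guard using CIFilter's inputKeys.
bool ofxCIFilter::hasSetting(const string &key) {
    NSString *nsKey = [NSString stringWithUTF8String:key.c_str()];
    return [[_filter inputKeys] containsObject:nsKey];
}

string ofxCIFilter::getAvailableSettings() {
    string settings;
    for (NSString *key in [_filter inputKeys]) {
        settings += string([key UTF8String]) + " ";
    }
    return settings;
}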

We would also need to set the ranges of the effects using the right key/value pairs, since CI uses such strange number ranges for everything. I kind of liked those being exposed in my implementation, so you don't have to look up whether the range is 0 to 2PI, -PI to PI, or -28PI to 28PI... those are all angle ranges the original filters actually use, which is obnoxious. Although, I suppose for the ones that don't need x->y we could just set them 0-1 and ofMap the range from lowval to highval... depends on how people think about their ranges, I guess.
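
Roughly, the 0-1 idea could look like this (a sketch; setAngleNormalized, lowVal, and highVal are hypothetical, with the real min/max presumably read from the filter's attributes dictionary):

// Map a user-facing 0-1 value onto whatever native range this
// particular filter expects for its angle parameter.
void ofxCIFilter::setAngleNormalized(float t) {
    float angle = ofMap(t, 0, 1, lowVal, highVal, true); // clamped to range
    setAngle(angle);
}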

I would also build out the input and output options for ease of use, so it can take something like an ofTexture and return one as well.

I'd be willing to explore your implementation - maybe I'll create another branch called something like "dynamic structure" and we can use that as the test bed until it's ready for use.

Do you know if it's even necessary to do it the way I'm doing it, sharing a single GL context from another object (ofxCoreImage), or does it even matter if they aren't sharing contexts? I think I've read that it's faster if they all share the same context, but that may just be handled internally anyway.

Thanks again for the suggestions!

Cool, I'll create a sample project for you to look at. I like the getAvailableSettings() function. As for the context: I am a bit uneasy about sharing the context with openFrameworks; it might lead to problems at some point. I think the best way is to create a CGBitmapContext that the CIContext draws into. That way we can draw straight into an ofImage. If you create a second OpenGL context, there is a performance penalty for switching between contexts. But right now I think I am actually going to create a new CGContext every time the user calls getOutput(). I don't think there is too much of a performance penalty in creating a new context every time, because I have seen OpenCV projects do this to convert CGImage -> cv::Mat; see http://stackoverflow.com/questions/14332687/converting-uiimage-to-cvmat
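
Roughly what I mean (an untested sketch; 'filtered' stands for the filter's output CIImage and 'dst' for an RGBA ofImage):

// Back a CIContext with a bitmap whose memory is the ofImage's own
// pixel buffer, so rendering lands directly in OF.
int w = dst.getWidth();
int h = dst.getHeight();
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
CGContextRef bitmap = CGBitmapContextCreate(dst.getPixels(), w, h, 8, w * 4,
                                            cs, kCGImageAlphaPremultipliedLast);
CIContext *context = [CIContext contextWithCGContext:bitmap options:nil];
[context drawImage:filtered
            inRect:CGRectMake(0, 0, w, h)
          fromRect:[filtered extent]];
dst.update();
CGContextRelease(bitmap);
CGColorSpaceRelease(cs);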

When setting parameters, setValue:forKey: probably throws an exception if the key does not exist, so we can catch that and print an error message, as well as a list of the possible parameters.
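
Something like this (a sketch; NSUndefinedKeyException is what key-value coding raises for unknown keys):

@try {
    [_filter setValue:@(radius) forKey:@"inputRadius"];
}
@catch (NSException *exception) {
    ofLogError("ofxCIFilter") << "bad parameter: " << [[exception reason] UTF8String];
    // could also log getAvailableSettings() / [_filter inputKeys] here
}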

I made a sample project here: https://github.com/Ahbee/CoreImage_Sample. It works with OF 0.8.1; just clone it into your myApps folder. The API looks good and the code works, but unfortunately there seems to be a huge memory leak. The cause seems to be the render:toBitmap:rowBytes:bounds:format:colorSpace: call in ofxCIFilter::getOutput(), but I am not able to track down the reason for it. It might be a Core Image bug; it probably has something to do with sharing contexts. I am going to try using two separate contexts next. Hey, do you mind if I create and maintain a separate Core Image addon? The problem is that I wrote a lot of code, which would be hard for someone else to go through. I also want to add a lot more features (iOS, face detection) and API changes which you may or may not like. Thanks for your time!

Yeah, please go ahead and do another addon. I made this because it's something I've wanted for the couple of years I've been using OF and I finally had time to make it, but I'm glad someone else is going to be making something in a similar vein. If you have concerns about the name, let me know if you think it makes more sense for me to rename mine to something like ofxCoreImageFilter, since I'm primarily focusing on 'fragment shaders' for beginners rather than fully building out the feature set. Keep me updated, and thanks for the comments!

I managed to create a separate context and get it to work. It works with GLFW and the programmable renderer now. You can find it here: https://github.com/Ahbee/ofxCoreImageFilters.

You mentioned in your readme that some filters had a memory leak. I found that this is because the CIContext caches information that never gets released until you release the context. This is probably a bug on Apple's part. My solution was to periodically release and reallocate the CIContext; by default I do this every 8000 frames. It fixes the memory issues and does not affect frame rates, so it's all good. Please let me know what you think of the addon, or if you run into problems. Thanks!
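
In pseudocode, the workaround is roughly this (the helper names are hypothetical, not the addon's exact code):

// Rebuild the CIContext every N frames so its internal cache gets freed.
static const int kFramesPerContext = 8000;

void ofxCIFilter::renderOutput() {
    if (++_frameCount % kFramesPerContext == 0) {
        releaseContext(); // release the old CIContext and its cache
        createContext();  // allocate a fresh one
    }
    // ... the normal render:toBitmap:... path continues here ...
}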

Awesome, I saw you posted it this morning and started looking through it. It looked pretty good to me; I'll take a closer look at some point today and see what you're doing behind the scenes.

What is the big switch you had to make to get it working in a GLFW window? I can probably uncover it in your code, but I'm curious what the difference is.

Also - I've heard quite a bit about the Core Image memory leak on NVIDIA cards in 10.9 (I chat with the developers of VDMX occasionally). Apple has been on the hook to provide a fix for quite a while and they still haven't rolled one out, but it is definitely a known bug on their end.

Basically, you have to create a separate OpenGL context for Core Image. If you look at createContext(), that is where I do it. Also, if you look at getOutput(), I switch GL contexts before rendering. The memory leaks don't seem to be as bad in the 64-bit version, so my guess is Apple is only working on the 64-bit version. Performance is also much better in 64-bit. But the hack I did will keep memory usage under control in the 32-bit version. I ran both examples for 6 hours straight and they did not leak.
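
Boiled down, the switch looks something like this (a simplified sketch, not the exact addon code):

// One-time setup: a dedicated GL context for Core Image.
NSOpenGLPixelFormatAttribute attrs[] = { NSOpenGLPFAAccelerated, NSOpenGLPFANoRecovery, 0 };
NSOpenGLPixelFormat *pf = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
NSOpenGLContext *ciGLContext = [[NSOpenGLContext alloc] initWithFormat:pf shareContext:nil];

// Inside getOutput(): make the Core Image context current only while rendering.
NSOpenGLContext *previous = [NSOpenGLContext currentContext];
[ciGLContext makeCurrentContext];
// ... CIContext rendering happens here ...
[previous makeCurrentContext]; // hand the GL state back to openFrameworks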