[mac, opencv] displaying IplImage in cocoa

update: i ended up not using the old version so i didn't look at it much. today i needed it again, realized it was crap, and rewrote it. here's a new, (hopefully) less buggy version that also handles single-channel images.

Through work i found the need to display an IplImage in cocoa.. after looking at CVOCV, James Hurley's example and a couple of other snippets i remembered that opencv can be built for iOS and started browsing through the source. Inside OpenCV-2.4.1/modules/highgui/src/window_cocoa.mm i found what i was looking for (CVView). i rewrote it a bit and came up with the following NSView subclass:


#import <Cocoa/Cocoa.h>
#import <opencv/cv.h>

@interface MyCVView : NSView {
	NSBitmapImageRep *bm;
}

@property (nonatomic, retain) NSImage *image;

- (void)setImagePrefs:(int)width		// image width
			   height:(int)height		// " height
			 channels:(int)nChannels	// usually 4/3/1 (rgba/rgb/monochrome)
			widthStep:(int)widthStep;	// usually width*nChannels
- (void)setImageData:(IplImage *)img shouldRelease:(BOOL)shouldRelease;
- (void)drawRect:(NSRect)rect;

@end


#import "MyCVView.h"

@implementation MyCVView 

@synthesize image;

- (id)init {
	if((self = [super init])) [self setCanDrawConcurrently:YES];
	return self;
}

- (void)setImagePrefs:(int)width
			   height:(int)height
			 channels:(int)nChannels
			widthStep:(int)widthStep {
	if(bm) [bm release];
	bm = [[NSBitmapImageRep alloc] initWithBitmapDataPlanes: NULL
												 pixelsWide: width
												 pixelsHigh: height
											  bitsPerSample: 8	// i only use IPL_DEPTH_8U, if you use others set it as an arg..
											samplesPerPixel: nChannels
												   hasAlpha: (nChannels == 4)	// with 4 samples in rgb space the alpha flag must be set
												   isPlanar: NO
											 colorSpaceName: (nChannels == 1 ? NSDeviceWhiteColorSpace : NSDeviceRGBColorSpace)
												bytesPerRow: widthStep
											   bitsPerPixel: nChannels*8];
	if(image) [image release];
	image = [[NSImage alloc] init];
	[image addRepresentation:bm];
}

- (void)setImageData:(IplImage *)img shouldRelease:(BOOL)shouldRelease {
	if(!bm) [self setImagePrefs:img->width
						 height:img->height
					   channels:img->nChannels
					  widthStep:img->widthStep];
	unsigned char *src = (unsigned char *)img->imageData;
	unsigned char *dst = [bm bitmapData];
	if(img->nChannels == 1) {
		memcpy(dst, src, img->height * img->widthStep);	// rows may be padded to widthStep
	} else {	// red-blue swap and flip incorporated into the copy loop
		int bpr = (int)[bm bytesPerRow];
		for(int y = 0; y < img->height; y++) {
			unsigned char *s = src + y * img->widthStep;
			unsigned char *d = dst + (img->height - 1 - y) * bpr;	// write rows bottom-up
			for(int x = 0; x < img->width; x++) {
				d[0] = s[2];	// opencv stores bgr(a), cocoa wants rgb(a)
				d[1] = s[1];
				d[2] = s[0];
				if(img->nChannels == 4) d[3] = s[3];
				s += img->nChannels;
				d += img->nChannels;
			}
		}
	}
	[self setNeedsDisplay:YES];
	if(shouldRelease) cvReleaseImage(&img);
}

- (void)drawRect:(NSRect)rect {
	[super drawRect:rect];
	// autoscaling - if you don't want it replace second part with comment below 
	NSRect ir = {{0,0}, {[self bounds].size.width, [self bounds].size.height}};
						// {[image size].width, [image size].height}};
	if(image != nil)
		[image drawInRect: ir
				 fromRect: NSZeroRect
				operation: NSCompositeSourceOver
				 fraction: 1.0];
}

- (void)dealloc {
	if(bm) [bm release];
	if(image) [image release];
	[super dealloc];
}

@end

This was built without warnings in xcode 4 on snow leopard and saves some cpu-cycles compared to the original version since the rb-swap and flip are stuffed into the data-copy loop. Note that you should not call it CVView (will cause a crash) since opencv already contains a class with that name.
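the copy-loop idea is easier to see isolated in plain c. this is just a sketch (bgra_to_rgba_flipped and its arguments are made up for this example, they're not part of the class): source rows may be padded out to width_step bytes, red and blue swap places, and rows are written bottom-up to flip the image in the same pass.

```c
#include <assert.h>
#include <string.h>

/* hypothetical stand-in for what setImageData: does on a 4-channel image:
   bgr(a) -> rgb(a) swap plus vertical flip, in one pass over the data.
   src rows are width_step bytes apart (may include padding),
   dst is assumed tightly packed at 4 bytes per pixel. */
void bgra_to_rgba_flipped(const unsigned char *src, unsigned char *dst,
                          int width, int height, int width_step)
{
	for(int y = 0; y < height; y++) {
		const unsigned char *s = src + y * width_step;          /* top-down read */
		unsigned char *d = dst + (height - 1 - y) * width * 4;  /* bottom-up write */
		for(int x = 0; x < width; x++) {
			d[0] = s[2];  /* r <- b slot */
			d[1] = s[1];  /* g unchanged */
			d[2] = s[0];  /* b <- r slot */
			d[3] = s[3];  /* alpha copied straight through */
			s += 4;
			d += 4;
		}
	}
}
```

a 1x2-pixel image with 4 bytes of row padding is enough to check both the swap and the flip by hand.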

// sluggo