
Image Processing for Dummies with C# and GDI+ Part 1 - Per Pixel Filters

By Christian Graus

The first in a series of articles which will build an image processing library in C# and GDI+

Welcome to this, my first article in C#, and the first in a series on image processing. I figure between
Nish and Chris Losinger waiting to bust my chops, I should learn as much as anyone from this article.

Overview

The purpose of the series will be to build a class that allows any C# programmer access to common,
and not so common, image processing functionality. The reason we are doing it in C# is simply that I
want to learn it, but the functionality we use is available through GDI+ in C++, and indeed the code to
do the same thing using a DIBSECTION is not terribly different. This first article will focus on per pixel
filters, in other words, filters that apply the same algorithm to each pixel 'in place' with no regard for
the values in any other pixels. You will see as we progress that the code becomes somewhat more
complex when we start moving pixels or changing values based on calculations that take into account
surrounding pixel values.

The App

The app we will use is a basic Windows Forms application ( it is in fact my first ). I've included code to
load and save images using GDI+, and a menu to which I will add filters. The filters are all static
functions in a class called BitmapFilter, so that an image can be passed in ( C# passes complex
types in by reference ) and a bool returned to indicate success or failure. As the series progresses I
am sure the app will get some other nice functionality, such as scaling and warping, but that will
probably happen as the focus of an article after the core functionality is in place. Scrolling is achieved
in the standard manner, the Paint method uses the AutoScrollPosition property to find out our
scroll position, which is set by using the AutoScrollMinSize property. Zooming is achieved through
a double, which we set whenever we change the scale, and which is used to set the
AutoScrollMinSize anew, as well as to scale the Rectangle we pass into DrawImage in the Paint
method.
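As a rough sketch of that plumbing (the field names m_Zoom and m_Bitmap are my own; AutoScrollPosition, AutoScrollMinSize and DrawImage are the real framework members), the paint and zoom code might look something like this:

```csharp
// Sketch of the scroll/zoom plumbing described above, assuming a Form
// with AutoScroll enabled. m_Zoom and m_Bitmap are hypothetical names.
private double m_Zoom = 1.0;
private Bitmap m_Bitmap;

protected override void OnPaint(PaintEventArgs e)
{
    if (m_Bitmap == null) return;

    // AutoScrollPosition is negative, so it offsets the drawing origin.
    Rectangle dest = new Rectangle(
        AutoScrollPosition.X, AutoScrollPosition.Y,
        (int)(m_Bitmap.Width * m_Zoom),
        (int)(m_Bitmap.Height * m_Zoom));

    e.Graphics.DrawImage(m_Bitmap, dest);
}

private void SetZoom(double zoom)
{
    m_Zoom = zoom;
    // Resize the virtual scroll area to match the scaled image.
    AutoScrollMinSize = new Size(
        (int)(m_Bitmap.Width * m_Zoom),
        (int)(m_Bitmap.Height * m_Zoom));
    Invalidate();
}
```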

Pixel Access, a.k.a. Unsafe code, and other nastiness

My first real disappointment in building this code was to find that the BitmapData class in GDI+ does
not allow us to access the data it stores, except through a pointer. This means we need to use the
unsafe keyword to scope the block of code which accesses the data. The net effect of this is that a
high security level is required for our code to execute, i.e. any code using the BitmapData class is
not likely to be run from a remote client. This is not an issue for us right now, though, and it is our
only viable option, as GetPixel/SetPixel is simply too slow for us to use iterating through bitmaps
of any real size.

The other downside is that this class is meant to be portable, but anyone using it will need to change
their project settings to allow compiling of unsafe code.

A quirk I noticed in the first beta of GDI+ continues to this day, namely that requesting a 24 bit RGB
image will return a 24 bit BGR image. BGR ( that is, pixels are stored as blue, green, red values ) is the
way Windows stores things internally, but I'm sure more than a few people will get a surprise when
they first use this function and realise they are not getting what they asked for.

Invert Filter

Here, then, is our first and most simple filter - it simply inverts a bitmap, meaning that we subtract
each pixel value from 255.



public static bool Invert(Bitmap b)
{
    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    int stride = bmData.Stride;
    System.IntPtr Scan0 = bmData.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;
        int nOffset = stride - b.Width*3;
        int nWidth = b.Width * 3;

        for(int y=0; y < b.Height; ++y)
        {
            for(int x=0; x < nWidth; ++x)
            {
                p[0] = (byte)(255-p[0]);
                ++p;
            }
            p += nOffset;
        }
    }

    b.UnlockBits(bmData);

    return true;
}

This example is so simple that it doesn't even matter that the pixels are out of order. The Stride
member tells us how wide a single line is in bytes, and the Scan0 member is the pointer to the data.
Within our unsafe block we grab the pointer and calculate our offset. Bitmap rows are aligned to four
byte boundaries, and so there can be a difference between the byte size of a row and the number of
pixels in it. This padding must be skipped - if we try to access it we will not simply fail, we will crash.
We therefore calculate the offset we need to jump at the end of each row and store it as nOffset.

The key thing when image processing is to do as much outside the loop as possible. An image of
1024x768 contains 786,432 individual pixels - a lot of extra overhead if we add a function call or
create a variable inside the loops. In this case, our x loop steps through Width*3 iterations; when we
care about each individual color, we will step the width only and increment our pointer by 3 for each
pixel.
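To make the padding arithmetic concrete, here is a small worked example for a hypothetical 24bpp bitmap that is 10 pixels wide (the width is my own illustrative choice):

```csharp
// Worked example of the row padding calculation for a 24bpp bitmap.
// Each pixel takes 3 bytes, but each row is rounded up to a multiple of 4.
int width = 10;
int bytesPerRow = width * 3;              // 30 bytes of actual pixel data
int stride = ((bytesPerRow + 3) / 4) * 4; // 32 - rows are 4-byte aligned
int nOffset = stride - bytesPerRow;       // 2 padding bytes to skip per row
```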

That should leave the rest of the code pretty straightforward. We are stepping through each pixel and
inverting it, as you can see here:

Grayscale filter

Subsequent examples will show less and less of the code, as you become more familiar with what the
boilerplate part of it does. The next, obvious filter is a grayscale filter. You might think that this would
involve simply summing the three color values and dividing by three, but this does not take into effect
the degree to which our eyes are sensitive to different colors. The correct balance is used in the
following code:



unsafe
{
    byte * p = (byte *)(void *)Scan0;

    int nOffset = stride - b.Width*3;

    byte red, green, blue;

    for(int y=0; y < b.Height; ++y)
    {
        for(int x=0; x < b.Width; ++x)
        {
            blue = p[0];
            green = p[1];
            red = p[2];

            p[0] = p[1] = p[2] = (byte)(.299 * red
                                      + .587 * green
                                      + .114 * blue);

            p += 3;
        }
        p += nOffset;
    }
}

As you can see, we are now iterating through the row b.Width times, and stepping through the
pointer in increments of 3, extracting the red, green and blue values individually. Recall that we are
pulling out BGR values, not RGB. Then we apply our formula to turn them into the grey value, which is
obviously the same for red, green and blue. The end result looks like this:

A note on the effects of filters

It's worthwhile observing before we continue that the Invert filter is the only non-destructive filter we
will look at. That is to say, the grayscale filter obviously discards information, so that the original
bitmap cannot be reconstructed from the data that remains. The same is also true as we move into
filters which take parameters. Doing a Brightness filter of 100, and then of -100 will not result in the
original image - we will lose contrast. The reason for that is that the values are clamped - the
Brightness filter adds a value to each pixel, and if we go over 255 or below 0 the value is adjusted
accordingly and so the difference between pixels that have been moved to a boundary is discarded.

Brightness filter

Having said that, the actual filter is pretty simple, based on what we already know:



for(int y=0; y < b.Height; ++y)
{
    for (int x = 0; x < nWidth; ++x)
    {
        nVal = (int)(p[0] + nBrightness);

        if (nVal < 0) nVal = 0;
        if (nVal > 255) nVal = 255;

        p[0] = (byte)nVal;

        ++p;
    }
    p += nOffset;
}

The two examples below use the values of 50 and -50 respectively, both on the original image.

Contrast

The operation of contrast is certainly the most complex we have attempted. Instead of just moving all
the pixels in the same direction, we must either increase or decrease the difference between groups of
pixels. We accept values between -100 and 100, but we turn these into a double between the values
of 0 and 4.



if (nContrast < -100) return false;
if (nContrast > 100) return false;

double pixel = 0, contrast = (100.0+nContrast)/100.0;

contrast *= contrast;

My policy has been to return false when invalid values are passed in, rather than clamp them: they
may be the result of a typo, so clamping may not give what was wanted, and returning false lets users
find out what values are valid, and thus have a realistic expectation of what result a given value might
give.

Our loop treats each color in the one iteration, although it's not necessary in this case to do it that
way.



red = p[2];

pixel = red/255.0;
pixel -= 0.5;
pixel *= contrast;
pixel += 0.5;
pixel *= 255;
if (pixel < 0) pixel = 0;
if (pixel > 255) pixel = 255;
p[2] = (byte) pixel;

We turn the pixel into a value between 0 and 1, and subtract .5. The net result is a negative value for
pixels that should be darkened, and positive for values we want to lighten. We multiply this value by
our contrast value, then reverse the process. Finally we clamp the result to make sure it is a valid color
value. The following images use contrast values of 30 and -30 respectively.
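Putting the pieces together, the body of the Contrast loop might be sketched as follows. This assumes the same LockBits/unsafe boilerplate as Invert; I've folded the three per-channel copies into a small inner loop, which is a restructuring of the article's unrolled version:

```csharp
// Sketch of the Contrast inner loop, assuming the usual LockBits/unsafe
// boilerplate. 'contrast' is the squared multiplier computed above;
// p steps through BGR triplets.
double pixel = 0;

for (int y = 0; y < b.Height; ++y)
{
    for (int x = 0; x < b.Width; ++x)
    {
        // Apply the same curve to each of the three colour bytes.
        for (int i = 0; i < 3; ++i)
        {
            pixel = p[i] / 255.0;  // scale to 0..1
            pixel -= 0.5;          // centre on zero
            pixel *= contrast;     // stretch or squash the deviation
            pixel += 0.5;          // re-centre
            pixel *= 255;          // scale back to 0..255
            if (pixel < 0) pixel = 0;
            if (pixel > 255) pixel = 255;
            p[i] = (byte)pixel;
        }
        p += 3;
    }
    p += nOffset;
}
```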

Gamma

First of all, an explanation of this filter. The following explanation of gamma was found on the web: "In
the early days of television it was discovered that CRTs do not produce a light intensity that is
proportional to the input voltage. Instead, the intensity produced by a CRT is proportional to the input
voltage raised to the power gamma. The value of gamma varies depending on the CRT, but is usually
close to 2.5. The gamma response of a CRT is caused by electrostatic effects in the electron gun." In
other words, the blue on my screen might well not be the same as the blue on your screen. A gamma
filter attempts to correct for this. It does this by building a gamma ramp: an array of 256 values each
for red, green and blue, based on the gamma value passed in (between .2 and 5). The array is built like
this:



byte [] redGamma = new byte [256];
byte [] greenGamma = new byte [256];
byte [] blueGamma = new byte [256];

for (int i = 0; i < 256; ++i)
{
    redGamma[i] = (byte)Math.Min(255, (int)((255.0
        * Math.Pow(i/255.0, 1.0/red)) + 0.5));
    greenGamma[i] = (byte)Math.Min(255, (int)((255.0
        * Math.Pow(i/255.0, 1.0/green)) + 0.5));
    blueGamma[i] = (byte)Math.Min(255, (int)((255.0
        * Math.Pow(i/255.0, 1.0/blue)) + 0.5));
}
You'll note at this point in development I found the Math class.

Having built this ramp, we step through our image, and set our values to the values stored at the
indices in the array. For example, if a red value is 5, it will be set to redGamma[5]. The code to
perform this operation is self evident, I'll jump right to the examples. I've used Gamma values of .6
and 3 for the two examples, with the original as always first for comparison. I used the same values
for red, green and blue, but the filter allows them to differ.
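For completeness, the "self evident" lookup loop might be sketched like this (same LockBits/unsafe boilerplate assumed as in the earlier filters):

```csharp
// Sketch: applying the gamma ramps built above. Each colour byte is
// replaced by its precomputed table entry - no math inside the loop.
for (int y = 0; y < b.Height; ++y)
{
    for (int x = 0; x < b.Width; ++x)
    {
        p[2] = redGamma[p[2]];    // red   (remember: the data is BGR)
        p[1] = greenGamma[p[1]];  // green
        p[0] = blueGamma[p[0]];   // blue
        p += 3;
    }
    p += nOffset;
}
```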

Color Filter

Our last filter is a color filter. It is very simple - it just adds or subtracts a value for each color. The most
useful thing to do with this filter is to set two colors to -255 in order to strip them and see one color
component of an image. I imagine by now you'd know exactly what that code will look like, so I'll give
you the red, green and blue components of my son to finish with. I hope you found this article
informative, the next will cover convolution filters, such as edge detection, smoothing, sharpening,
simple embossing, etc. See you then !!!
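For the record, a sketch of what that loop might look like (parameter validation and the LockBits/unsafe boilerplate are the same as for Brightness; red, green and blue are the per-channel adjustments):

```csharp
// Sketch of the Color filter: add a (possibly negative) value to each
// channel and clamp. Assumes the usual LockBits/unsafe boilerplate.
int nVal;

for (int y = 0; y < b.Height; ++y)
{
    for (int x = 0; x < b.Width; ++x)
    {
        nVal = p[2] + red;                 // red lives at offset 2 (BGR)
        p[2] = (byte)(nVal < 0 ? 0 : nVal > 255 ? 255 : nVal);

        nVal = p[1] + green;
        p[1] = (byte)(nVal < 0 ? 0 : nVal > 255 ? 255 : nVal);

        nVal = p[0] + blue;
        p[0] = (byte)(nVal < 0 ? 0 : nVal > 255 ? 255 : nVal);

        p += 3;
    }
    p += nOffset;
}
```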
I was just thinking that you could turn your code on its head and allocate your own byte array to store
the image data and get its address via a GCHandle object. You can create a Bitmap object from your
own BitmapData object. Something like the following:

byte[,,] bytedata = new byte[480,640,4];

GCHandle h = GCHandle.Alloc(bytedata, GCHandleType.Pinned);

BitmapData bd = new BitmapData();
bd.PixelFormat = PixelFormat.Format32bppArgb;
bd.Scan0 = h.AddrOfPinnedObject();
bd.Width = 640;
bd.Height = 480;
bd.Stride = 640 * 4;

Bitmap bmp = new Bitmap(bd.Width, bd.Height, bd.Stride, bd.PixelFormat,
    h.AddrOfPinnedObject());

It would be a good idea to free the GCHandle after disposing of the bitmap.

I doubt that manipulation would be as fast as using unsafe code, but the code is generally easier to
understand.

Another idea would be to create an ARGB struct and use a two-dimensional array of those. The array
needs to be indexed using
bytedata[y, x, colorcomponent] or argbdata[y, x].

Hi,

this is a tangent and only intended to point out a small error which in no way detracts from the
usefulness of your contribution. I haven't come far enough to say more than that it looks very interesting
and well written so far; I hope it continues like that!

To the point: You write "C# passes complex types in by reference", but either you were just a little
sloppy with your wording, or you are confusing the concepts of value or reference types on the one
hand with passing BY reference or value on the other hand. This isn't exactly a crime but it is
something a lot of people are confused about, so I figure if you can edit it so you don't contribute to
this confusion it would be a good thing.

In C# as in any .net language there are value types and reference types. When we declare a variable
of a type that is a value type, the variable is essentially a memory location where a binary
representation of the value is stored. All the primitive types except string are value types. Strings and
most non-primitive types (all classes, but not structs) are reference types. A variable of a type that is a
reference type is a memory location that stores a reference to another location, which stores either
more references, or values.

Now, passing BY reference or value is a completely distinct concept. A value type can be passed by value
OR by reference, and so can a reference type. What it means is this: When passing by value, a copy of
the variable passed is allocated (on the stack) and available to the function. When passing by
reference, a reference to the variable is placed on the stack.

So, if a value type is passed by value, the real "data" is copied to the stack and the method
manipulates a copy of the data rather than the external data itself. If a reference type is passed by
value, a copy of the reference is put on the stack. So if the method manipulates the referenced object
it sort of looks as if it was passed by reference - the external reference and the method parameter
refer to the same object, after all. However, when a method changes the reference itself the difference
becomes apparent for reference types as well:

// Modify the referenced object
void f(Person p) { p.Name = "Tata"; }

// Modify only the on-the-stack copy of the reference to the object.
void g(Person p) { p = null; }

// Modify the *external* variable, which is a reference
void h(ref Person p) { p = null; }

The first method is why people confuse passing by reference with reference types; it appears that we
are "modifying the external variable" when we are not. We are *dereferencing* it (with the dot
operator) and modifying the referenced object, hence the fact that the reference to the object is a copy
of the external reference becomes transparent. But the next two examples clearly illustrate the
distinction between the two calling conventions.
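A short demonstration of the three methods above makes the point concrete (Person is assumed to be a simple class with a Name field, as in the examples):

```csharp
// Demonstrates the difference between the three methods above.
// Assumes: class Person { public string Name; }
Person person = new Person();
person.Name = "Toto";

f(person);       // dereferences the copied reference
// person.Name is now "Tata" - the referenced object was modified

g(person);       // only the local copy of the reference is nulled
// person is still not null here

h(ref person);   // the caller's variable itself is passed by reference
// person is now null
```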

I hope this wasn't too much lecturing, I am only trying to help!!

How can I snap the colors of an RGB bitmap image to pure values when a pixel matches an RGB color
threshold? For example, I want pixels with:

R > 230 and G < 25 and B < 25 to be set to R=255, G=0, B=0.
R < 25 and G > 230 and B < 25 to be set to R=0, G=255, B=0.
R < 25 and G < 25 and B > 230 to be set to R=0, G=0, B=255.

R < 25 and G < 25 and B < 25 to be set to R=0, G=0, B=0.

All other combinations to be set to R=255, G=255, B=255.

Thanks
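For what it's worth, one way to do this is inside the article's usual LockBits/unsafe loop; a sketch of the per-pixel body, with the thresholds taken from the question above:

```csharp
// Sketch: snap each pixel to a pure primary (or black/white) based on
// the thresholds in the question. Uses the article's BGR byte layout,
// inside the standard per-pixel loop (p advances 3 bytes per pixel).
byte r = p[2], g = p[1], bl = p[0];

if (r > 230 && g < 25 && bl < 25)      { p[2] = 255; p[1] = 0;   p[0] = 0;   }
else if (r < 25 && g > 230 && bl < 25) { p[2] = 0;   p[1] = 255; p[0] = 0;   }
else if (r < 25 && g < 25 && bl > 230) { p[2] = 0;   p[1] = 0;   p[0] = 255; }
else if (r < 25 && g < 25 && bl < 25)  { p[2] = 0;   p[1] = 0;   p[0] = 0;   }
else                                   { p[2] = 255; p[1] = 255; p[0] = 255; }

p += 3;  // then advance to the next pixel as usual
```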

It is really a great article.

I am trying to replace a particular color with another one using this code, e.g. if the picture has white
I want to change it to black. For that purpose I have made the following changes, but I am not
getting the desired results. Can you please give me some direction on what I am missing here?
Thanks in advance.
public static bool ChangeColor(Bitmap b, int red, int green, int blue,
    int red1, int green1, int blue1)
{
    if (red < -255 || red > 255) return false;
    if (green < -255 || green > 255) return false;
    if (blue < -255 || blue > 255) return false;
    if (red1 < -255 || red1 > 255) return false;
    if (green1 < -255 || green1 > 255) return false;
    if (blue1 < -255 || blue1 > 255) return false;

    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    int stride = bmData.Stride;
    System.IntPtr Scan0 = bmData.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;

        int nOffset = stride - b.Width*3;

        for(int y=0; y < b.Height; ++y)
        {
            for(int x=0; x < b.Width; ++x)
            {
                if(p[2] == red1 && p[1] == green1 && p[0] == blue1)
                {
                    p[2] = (byte)red;
                    p[1] = (byte)green;
                    p[0] = (byte)blue;
                }

                p += 3;
            }
            p += nOffset;
        }
    }

    b.UnlockBits(bmData);

    return true;
}
Image Processing for Dummies with C# and GDI+ Part 2 - Convolution Filters
By Christian Graus

The second in a series of articles which will build an image processing library in C# and GDI+.

Overview

Welcome back for the second installment in this series. This installment serves as an introduction to
the world of convolution filters. It is also the first version of our program that offers one level of undo.
We'll build on that later, but for now I thought it mandatory that you be able to undo your experiments
without having to reload the image every time.

So what is a convolution filter ? Essentially, it's a matrix, as follows:

Identity
0 0 0
0 1 0
0 0 0 /1+0

The idea is that the pixel we are processing, and the eight that surround it, are each given a weight.
The total value of the matrix is divided by a factor, and optionally an offset is added to the end value.
The matrix above is called an identity matrix, because the image is not changed by passing through it.
Usually the factor is the value derived from adding all the values in the matrix together, which ensures
the end value will be in the range 0-255. Where this is not the case, for example, in an embossing
filter where the values add up to 0, an offset of 127 is common. I should also mention that convolution
filters come in a variety of sizes, 7x7 is not unheard of, and edge detection filters in particular are not
symmetrical. Also, the bigger the filter, the more pixels we cannot process, as we cannot process
pixels that do not have the number of surrounding pixels our matrix requires. In our case, the outer
edges of the image to a depth of one pixel will go unprocessed.

A Framework

First of all we need to establish a framework from which to write these filters, otherwise we'll find
ourselves writing the same code over and again. As our filter now relies on surrounding values to get
a result, we are going to need a source and a destination bitmap. I tend to create a copy of the
bitmap coming in and use the copy as the source, as it is the one getting discarded in the end. To
facilitate this, I define a matrix class as follows:



public class ConvMatrix
{
    public int TopLeft = 0, TopMid = 0, TopRight = 0;
    public int MidLeft = 0, Pixel = 1, MidRight = 0;
    public int BottomLeft = 0, BottomMid = 0, BottomRight = 0;
    public int Factor = 1;
    public int Offset = 0;

    public void SetAll(int nVal)
    {
        TopLeft = TopMid = TopRight = MidLeft = Pixel = MidRight =
            BottomLeft = BottomMid = BottomRight = nVal;
    }
}

I'm sure you noticed that it is an identity matrix by default. I also define a method that sets all the
elements of the matrix to the same value.

The pixel processing code is more complex than our last article, because we need to access nine pixels,
and two bitmaps. I do this by defining constants for jumping one and two rows ( because we want to
avoid calculations as much as possible in the main loop, we define both instead of adding one to itself,
or multiplying it by 2 ). We can then use these values to write our code. As our initial offsets into the
different colors are 0, 1, and 2, we end up with 3 and 6 added to each of those values to create indices
for three pixels across, and use our constants to add the rows. To ensure values don't jump from the
bottom of the range to the top ( a byte wrapping past 0 or 255 ), we accumulate each pixel value in
one int, which is then clamped and stored. Here is the entire function:



public static bool Conv3x3(Bitmap b, ConvMatrix m)
{
    // Avoid divide by zero errors
    if (0 == m.Factor)
        return false;

    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    Bitmap bSrc = (Bitmap)b.Clone();

    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format24bppRgb);
    BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height),
        ImageLockMode.ReadWrite,
        PixelFormat.Format24bppRgb);

    int stride = bmData.Stride;
    int stride2 = stride * 2;

    System.IntPtr Scan0 = bmData.Scan0;
    System.IntPtr SrcScan0 = bmSrc.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;
        byte * pSrc = (byte *)(void *)SrcScan0;

        int nOffset = stride - b.Width*3;
        int nWidth = b.Width - 2;
        int nHeight = b.Height - 2;

        int nPixel;

        for(int y=0; y < nHeight; ++y)
        {
            for(int x=0; x < nWidth; ++x)
            {
                // Red component
                nPixel = ((((pSrc[2] * m.TopLeft) +
                    (pSrc[5] * m.TopMid) +
                    (pSrc[8] * m.TopRight) +
                    (pSrc[2 + stride] * m.MidLeft) +
                    (pSrc[5 + stride] * m.Pixel) +
                    (pSrc[8 + stride] * m.MidRight) +
                    (pSrc[2 + stride2] * m.BottomLeft) +
                    (pSrc[5 + stride2] * m.BottomMid) +
                    (pSrc[8 + stride2] * m.BottomRight))
                    / m.Factor) + m.Offset);

                if (nPixel < 0) nPixel = 0;
                if (nPixel > 255) nPixel = 255;
                p[5 + stride] = (byte)nPixel;

                // Green component
                nPixel = ((((pSrc[1] * m.TopLeft) +
                    (pSrc[4] * m.TopMid) +
                    (pSrc[7] * m.TopRight) +
                    (pSrc[1 + stride] * m.MidLeft) +
                    (pSrc[4 + stride] * m.Pixel) +
                    (pSrc[7 + stride] * m.MidRight) +
                    (pSrc[1 + stride2] * m.BottomLeft) +
                    (pSrc[4 + stride2] * m.BottomMid) +
                    (pSrc[7 + stride2] * m.BottomRight))
                    / m.Factor) + m.Offset);

                if (nPixel < 0) nPixel = 0;
                if (nPixel > 255) nPixel = 255;
                p[4 + stride] = (byte)nPixel;

                // Blue component
                nPixel = ((((pSrc[0] * m.TopLeft) +
                    (pSrc[3] * m.TopMid) +
                    (pSrc[6] * m.TopRight) +
                    (pSrc[0 + stride] * m.MidLeft) +
                    (pSrc[3 + stride] * m.Pixel) +
                    (pSrc[6 + stride] * m.MidRight) +
                    (pSrc[0 + stride2] * m.BottomLeft) +
                    (pSrc[3 + stride2] * m.BottomMid) +
                    (pSrc[6 + stride2] * m.BottomRight))
                    / m.Factor) + m.Offset);

                if (nPixel < 0) nPixel = 0;
                if (nPixel > 255) nPixel = 255;
                p[3 + stride] = (byte)nPixel;

                p += 3;
                pSrc += 3;
            }

            p += nOffset;
            pSrc += nOffset;
        }
    }

    b.UnlockBits(bmData);
    bSrc.UnlockBits(bmSrc);

    return true;
}

Not the sort of thing you want to have to write over and over, is it ? Now we can use our ConvMatrix
class to define filters, and just pass them into this function, which does all the gruesome stuff for us.

Smoothing

Given what I've told you about the mechanics of this filter, it is obvious how we create a smoothing
effect. We ascribe values to all our pixels, so that the weight of each pixel is spread over the
surrounding area. The code looks like this:



public static bool Smooth(Bitmap b, int nWeight /* default to 1 */)
{
    ConvMatrix m = new ConvMatrix();
    m.SetAll(1);
    m.Pixel = nWeight;
    m.Factor = nWeight + 8;

    return BitmapFilter.Conv3x3(b, m);
}
As you can see, it's simple to write the filters in the context of our framework. Most of these filters
have at least one parameter; unfortunately C# does not have default parameter values, so I put them
in a comment for you. The net result of applying this filter several times is as follows:

Gaussian Blur

Gaussian Blur filters locate significant color transitions in an image, then create intermediary colors to
soften the edges. The filter looks like this:

Gaussian Blur
1 2 1
2 4 2
1 2 1 /16+0

The middle value is the one you can alter with the filter provided. You can see that the default value
in particular makes for a circular effect, with pixels given less weight the further they are from the
centre. In fact, this sort of smoothing generates an image not unlike an out of focus lens.
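Expressed through the same framework as Smooth, a Gaussian blur might be sketched like this (the function name and the default weight of 4 are my own choices; the weights are those of the matrix above):

```csharp
// Sketch of a Gaussian blur built on the ConvMatrix/Conv3x3 framework.
// nWeight is the adjustable centre value; 4 gives the matrix shown above.
public static bool GaussianBlur(Bitmap b, int nWeight /* default to 4 */)
{
    ConvMatrix m = new ConvMatrix();
    m.SetAll(1);                                        // corners = 1
    m.TopMid = m.MidLeft = m.MidRight = m.BottomMid = 2; // edges = 2
    m.Pixel = nWeight;                                   // centre
    m.Factor = nWeight + 12;   // sum of all weights, so output stays 0-255

    return BitmapFilter.Conv3x3(b, m);
}
```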

Sharpen

On the other end of the scale, a sharpen filter looks like this:

Sharpen
0 -2 0
-2 11 -2
0 -2 0 /3+0

If you compare this to the Gaussian blur you'll note it's almost an exact opposite. It sharpens an image
by enhancing the difference between pixels. The greater the difference between the pixels that are
given a negative weight and the pixel being modified, the greater the change in the main pixel value.
The degree of sharpening can be adjusted by changing the centre weight. To show the effect better I
have started with a blurred picture for this example.

Mean Removal

The Mean Removal filter is also a sharpen filter, it looks like this:

Mean Removal
-1 -1 -1
-1 9 -1
-1 -1 -1 /1+0

Unlike the previous filter, which only worked in the horizontal and vertical directions, this one spreads
its influence diagonally as well, with the following result on the same source image. Once again, the
central value is the one to change in order to change the degree of the effect.

Embossing

Probably the most spectacular filter you can do with a convolution filter is embossing. Embossing is
really just an edge detection filter. I'll cover another simple edge detection filter after this and you'll
notice it's quite similar. Edge detection generally works by offsetting a positive and a negative value
across an axis, so that the greater the difference between the two pixels, the higher the value
returned. With an emboss filter, because our filter values add to 0 instead of 1, we use an offset of
127 to brighten the image, otherwise much of it would clamp to black.

The filter I have implemented looks like this:

Emboss Laplacian
-1 0 -1
0 4 0
-1 0 -1 /1+127

and it looks like this:
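In code, via the same ConvMatrix framework, that emboss might be sketched as follows (the function name is my own):

```csharp
// Sketch of the emboss filter above. The weights sum to zero, so the
// Offset of 127 lifts the result out of black, as described in the text.
public static bool EmbossLaplacian(Bitmap b)
{
    ConvMatrix m = new ConvMatrix();
    m.SetAll(-1);                                        // diagonals = -1
    m.TopMid = m.MidLeft = m.MidRight = m.BottomMid = 0; // axes = 0
    m.Pixel = 4;                                         // centre
    m.Factor = 1;
    m.Offset = 127;

    return BitmapFilter.Conv3x3(b, m);
}
```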

As you might have noticed, this emboss works in both diagonal directions. I've also included a custom
dialog where you can enter your own filters, you might like to try some of these for embossing:

Horz/Vertical
0 -1 0
-1 4 -1
0 -1 0 /1+127

All Directions
-1 -1 -1
-1 8 -1
-1 -1 -1 /1+127

Lossy
1 -2 1
-2 4 -2
-2 1 -2 /1+127

Horizontal Only
0 0 0
-1 2 -1
0 0 0 /1+127

Vertical Only
0 -1 0
0 0 0
0 1 0 /1+127

The horizontal and vertical only filters differ for no other reason than to show two variations. You can
actually rotate these filters as well, by rotating the values around the central point. You'll notice the
filter I have used is the horz/vertical filter rotated by one position, for example.

Let's not get carried away

Although this is kinda cool, you will notice if you run Photoshop that it offers a lot more functionality
than the emboss I've shown you here. Photoshop creates an emboss using a more specifically written
filter, and only part of that functionality can be simulated using convolution filters. I have spent some
time writing a more flexible emboss filter, once we've covered bilinear filtering and the like, I may
write an article on a more complete emboss filter down the track.

Edge Detection

Finally, just a simple edge detection filter, as a foretaste of the next article, which will explore a
number of ways to detect edges. The filter looks like this:

Edge Detect
1 1 1
0 0 0
-1 -1 -1 /1+127

Like all edge detection filters, this filter is not concerned with the value of the pixel being examined,
but rather in the difference between the pixels surrounding it. As it stands it will detect a horizontal
edge, and, like the embossing filters, can be rotated. As I said before, the embossing filters are
essentially doing edge detection, this one just heightens the effect.

What's in store

The next article will be covering a variety of edge detection methods. I'd also encourage you to search
the web for convolution filters. The comp.graphics.algorithms newsgroup tends to lean towards 3D
graphics, but if you search an archive like google news for 'convolution' you'll find plenty more ideas to
try in the custom dialog.
Image Processing for Dummies with C# and GDI+ Part 3 - Edge Detection Filters
By Christian Graus

The third in a series of articles which will build an image processing library in C# and GDI+

Introduction

Welcome back. This is probably going to be the last in this series for a while; I want to focus on some
other things to learn some more C#, and come back to this when I have more time.

Overview

This article will focus on one of the most common image processing tasks, detecting edges. We will
look at a number of ways to do this, and also look at one use for such information, an edge enhance
filter. We will start with what we know from the last article, using convolution filters to detect edges.

Convolution Filters - Sobel, Prewitt and Kirsch

We will use three different convolution masks to detect edges, named presumably after their
inventors. In each case, we apply a horizontal version of the filter to one bitmap, a vertical version to
another, and the formula pixel = sqrt(pixel1 * pixel1 + pixel2 * pixel2) to merge them together.
Hopefully you're familiar enough with the previous articles to know what the code would look like to do
this. The convolution masks look like this:

Sobel
1 2 1
0 0 0
-1 -2 -1 /1+0

Prewitt
1 1 1
0 0 0
-1 -1 -1 /1+0

Kirsch
5 5 5
-3 -3 -3
-3 -3 -3 /1+0

These filters perform the horizontal edge detect, rotating them 90 degrees gives us the vertical, and
then the merge takes place.
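Since the text leaves the merge code as an exercise, here is a hedged sketch of it: it assumes the horizontal and vertical filters have already been run on two clones of the bitmap, and that p1 and p2 point at their locked bits (with the same stride and nOffset as the destination pointer p, as in the earlier boilerplate):

```csharp
// Sketch of merging horizontal and vertical edge-detect results using
// pixel = sqrt(pixel1^2 + pixel2^2). p1/p2 point at the two filtered
// clones, p at the destination; nWidth = b.Width * 3.
for (int y = 0; y < b.Height; ++y)
{
    for (int x = 0; x < nWidth; ++x)
    {
        int nPixel = (int)Math.Sqrt((p1[0] * p1[0]) + (p2[0] * p2[0]));
        if (nPixel > 255) nPixel = 255;   // magnitude can exceed 255
        p[0] = (byte)nPixel;
        ++p; ++p1; ++p2;
    }
    p += nOffset; p1 += nOffset; p2 += nOffset;
}
```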
How do they work ?

Edge detection filters work essentially by looking for contrast in an image. This can be done a number
of different ways, the convolution filters do it by applying a negative weight on one edge, and a
positive on the other. This has the net effect of trending towards zero if the values are the same, and
trending upwards as contrast exists. This is precisely how our emboss filter worked, and using an
offset of 127 would again make these filters look similar to our previous embossing filter. The
following examples follow the different filter types in the same order as the filters above. The images
have a tooltip if you want to be sure which is which. These three filters also allow specification of a
threshold. Any value below this threshold will be clamped to it. For the test I have kept the threshold
at 0.

Horizontal and Vertical Edge Detection

To perform an edge detection operation in just the horizontal or vertical planes, we can again use a
convolution method. However, rather than use our framework for 3x3 filters, we are better off writing
the code from scratch so that our filter ( which will be a Prewitt filter ) will be either very wide, or very
high. I've chosen 7 as a good number; our horizontal filter is 7x3 and our vertical filter is 3x7. The
code is not dissimilar enough from what we've already done to warrant showing it to you especially,
but it's there if you want to have a look. Following is the result first of our horizontal filter, and then
the vertical one.

There's more to life than convolution

Convolution filters can do some cool stuff, and if you did a search online, you'd be forgiven for thinking
that they are behind all image processing. However, it's probably more true that the sort of filters you
see in Photoshop are especially written to directly do what a convolution filter can only imitate. I'd
again point to the Photoshop embossing filter with its range of options as evidence of this.

The problem with convolution for edge detection is not so much that the process is unsatisfactory as
that it is unnecessarily expensive. I'm going to cover two more methods of edge detection, which both
involve iterating through the image directly and doing a number of compares on neighbouring pixels,
but which treat the resultant values differently to a convolution filter.

Homogeneity Edge Detection

If we are to perceive an edge in an image, it follows that there is a change in colour between two
objects, for an edge to be apparent. To put it another way, if we were to take a pixel and store as its
value the greatest difference between its starting value and the values of its eight neighbours, we
would come up with black where the pixels are the same, and trend towards white the greater the
colour difference. We would detect the edges in the image. Furthermore, if we allowed a
threshold to be set, and set values below this to 0, we could eliminate soft edges to whatever degree
we desire. The code to do this is followed by an example at threshold 0 and one at threshold 127.



public static bool EdgeDetectHomogenity(Bitmap b, byte nThreshold)
{
    // This one works by working out the greatest difference between a
    // pixel and its eight neighbours. The threshold allows softer edges
    // to be forced down to black, use 0 to negate its effect.
    Bitmap b2 = (Bitmap) b.Clone();

    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    BitmapData bmData2 = b2.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    int stride = bmData.Stride;

    System.IntPtr Scan0 = bmData.Scan0;
    System.IntPtr Scan02 = bmData2.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;
        byte * p2 = (byte *)(void *)Scan02;

        int nOffset = stride - b.Width*3;
        int nWidth = b.Width * 3;

        int nPixel = 0, nPixelMax = 0;

        p += stride;
        p2 += stride;

        for (int y = 1; y < b.Height-1; ++y)
        {
            p += 3;
            p2 += 3;

            for (int x = 3; x < nWidth-3; ++x)
            {
                // Compare against all eight neighbours: the row below,
                // the same row, and the row above.
                nPixelMax = Math.Abs(p2[0] - (p2 + stride - 3)[0]);

                nPixel = Math.Abs(p2[0] - (p2 + stride)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs(p2[0] - (p2 + stride + 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs(p2[0] - (p2 - 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs(p2[0] - (p2 + 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs(p2[0] - (p2 - stride - 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs(p2[0] - (p2 - stride)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs(p2[0] - (p2 - stride + 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                if (nPixelMax < nThreshold) nPixelMax = 0;

                p[0] = (byte) nPixelMax;

                ++p;
                ++p2;
            }

            p += 3 + nOffset;
            p2 += 3 + nOffset;
        }
    }

    b.UnlockBits(bmData);
    b2.UnlockBits(bmData2);

    return true;
}

Difference Edge Detection


The difference edge detection works in a similar way, but it detects the difference between pairs of
pixels around the pixel we are setting. It works out the highest value from the differences of the four
pairs of pixels that can be used to form a line through the middle pixel. The threshold works the same
as in the homogeneity filter. Again, here is the code, followed by two examples, one with no threshold,
one with a threshold of 127.



public static bool EdgeDetectDifference(Bitmap b, byte nThreshold)
{
    // This one works by working out the greatest difference between the
    // four pairs of opposite neighbours around a pixel. The threshold
    // allows softer edges to be forced down to black, use 0 to negate
    // its effect.
    Bitmap b2 = (Bitmap) b.Clone();

    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    BitmapData bmData2 = b2.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    int stride = bmData.Stride;

    System.IntPtr Scan0 = bmData.Scan0;
    System.IntPtr Scan02 = bmData2.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;
        byte * p2 = (byte *)(void *)Scan02;

        int nOffset = stride - b.Width*3;
        int nWidth = b.Width * 3;

        int nPixel = 0, nPixelMax = 0;

        p += stride;
        p2 += stride;

        for (int y = 1; y < b.Height-1; ++y)
        {
            p += 3;
            p2 += 3;

            for (int x = 3; x < nWidth-3; ++x)
            {
                // One diagonal, the other diagonal, vertical, horizontal.
                nPixelMax = Math.Abs((p2 - stride + 3)[0] - (p2 + stride - 3)[0]);

                nPixel = Math.Abs((p2 + stride + 3)[0] - (p2 - stride - 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs((p2 - stride)[0] - (p2 + stride)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs((p2 + 3)[0] - (p2 - 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                if (nPixelMax < nThreshold) nPixelMax = 0;

                p[0] = (byte) nPixelMax;

                ++p;
                ++p2;
            }

            p += 3 + nOffset;
            p2 += 3 + nOffset;
        }
    }

    b.UnlockBits(bmData);
    b2.UnlockBits(bmData2);

    return true;
}
Edge Enhancement

One thing we can use edge detection for is to enhance edges in an image. The concept is simple - we
apply an edge filter, but we only store the value we derive if it is greater than the value already
present. Therefore if we find an edge, we will brighten it. The end result is a filter which fattens the
outline of objects within it. We again apply a threshold, so that we can control how harsh an edge
must be before we enhance it. Again, I am going to give you the code, and an example of edge
enhancement with values of 0 and 127, but because the result is a bit harder to see, I'll also give you
the original image next to each for comparison. Don't worry, your browser cached the starting image,
so it won't slow the page down :-)



public static bool EdgeEnhance(Bitmap b, byte nThreshold)
{
    // This one applies the same difference filter as above, but only
    // keeps the result where it is brighter than the pixel already
    // there. The threshold controls how harsh an edge must be before
    // we enhance it.
    Bitmap b2 = (Bitmap) b.Clone();

    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    BitmapData bmData2 = b2.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    int stride = bmData.Stride;

    System.IntPtr Scan0 = bmData.Scan0;
    System.IntPtr Scan02 = bmData2.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;
        byte * p2 = (byte *)(void *)Scan02;

        int nOffset = stride - b.Width*3;
        int nWidth = b.Width * 3;

        int nPixel = 0, nPixelMax = 0;

        p += stride;
        p2 += stride;

        for (int y = 1; y < b.Height-1; ++y)
        {
            p += 3;
            p2 += 3;

            for (int x = 3; x < nWidth-3; ++x)
            {
                nPixelMax = Math.Abs((p2 - stride + 3)[0] - (p2 + stride - 3)[0]);

                nPixel = Math.Abs((p2 + stride + 3)[0] - (p2 - stride - 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs((p2 - stride)[0] - (p2 + stride)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                nPixel = Math.Abs((p2 + 3)[0] - (p2 - 3)[0]);
                if (nPixel > nPixelMax) nPixelMax = nPixel;

                // Only store the edge value if it is both over the
                // threshold and brighter than the existing pixel.
                if (nPixelMax > nThreshold && nPixelMax > p[0])
                    p[0] = (byte) nPixelMax;

                ++p;
                ++p2;
            }

            p += nOffset + 3;
            p2 += nOffset + 3;
        }
    }

    b.UnlockBits(bmData);
    b2.UnlockBits(bmData2);

    return true;
}

I have to say the effect here is somewhat muted - I can't readily see it in the images, but it is very
apparent when I have the program open and can swap between them.

I hope you've found this article useful. I have a lot more to say on the subject of image processing, but
it's not going to be for a little while, as I have other articles I want to get done and also projects I need
to undertake to increase my skillset in areas pertaining to my work. But, to quote my favourite actor,
'I'll be back'.
Image Processing for Dummies with C# and GDI+ Part 4 - Bilinear Filters and
Resizing
By Christian Graus

The fourth installment covers how to write a filter that resizes an image, and uses bilinear filtering

Here we go again...

Well, this is the fourth installment in the series, and I thank you for sticking around this long. I want
to do some groundwork for a future article, which will be involved enough that I didn't want to
add bilinear filtering to the mix at that stage, so I'm covering it here. In a nutshell, bilinear filtering is
a method of increasing the accuracy with which we can select pixels, by allowing the selection of pixels
in between the ones we draw. Honest!!! To illustrate its effect, we are going to write a resize filter
first, then we are going to add a bilinear filter to see the effect it has on the final result.

A resize filter.

If you want to resize an image arbitrarily, the easiest way to do it is to calculate a factor for the
difference between the source and destination in both x and y axes, then use that factor to figure out
which pixel on the source image maps to the colour being placed on the destination image. Note that
for this filter I step through the destination image and calculate the source pixels from there; this
ensures that every pixel in the destination image is filled.

SetPixel ?

Before I show you the code, you'll notice that I've chosen to use Set/GetPixel this time around
instead of getting a pointer to my bitmap data. This does two things for me, firstly it means my code
is not 'unsafe', and secondly, it makes the code a lot simpler, which will help when we add the bilinear
filter, which does enough work without my sample being cluttered by all the pointer lookup code that
would also be required, as you will see.

The code

Here then is my code for a function that resizes a bitmap, fills it with data from a copy that was made
prior, and then returns it. Note that unlike my other functions, I found I had to return the new
Bitmap because when I create one of a new size, it is no longer the same bitmap that is referred to
by the 'in' parameter, and therefore I am unable to return a bool to indicate success.



public static Bitmap Resize(Bitmap b, int nWidth, int nHeight, bool bBilinear)
{
    Bitmap bTemp = (Bitmap)b.Clone();
    b = new Bitmap(nWidth, nHeight, bTemp.PixelFormat);

    double nXFactor = (double)bTemp.Width/(double)nWidth;
    double nYFactor = (double)bTemp.Height/(double)nHeight;

    if (bBilinear)
    {
        // Not yet 80)
    }
    else
    {
        for (int x = 0; x < b.Width; ++x)
            for (int y = 0; y < b.Height; ++y)
                b.SetPixel(x, y, bTemp.GetPixel((int)(Math.Floor(x * nXFactor)),
                    (int)(Math.Floor(y * nYFactor))));
    }

    return b;
}

In order to highlight the artifacts we get from such a filter, I have taken an image of Calvin and
increased the width while decreasing the height ( both by 10 pixels ) several times, to get the
following:

As you can see, things start to deteriorate fairly rapidly.

Bilinear Filtering

The problem we are having above is that we are not grabbing the pixels we want a lot of the time. If
we resize an image of 100 x 100 to 160 x 110, for example, then our X scale is 100/160, or .625. In
other words, to fill column 43, we need to look up column (43 * .625), or 26.875. Obviously we are
not able to look up such a value, so we will end up with column 27. In this case, the difference is slight,
but we can obviously end up with decimal values including .5, right in the middle between two existing
pixels. The image above shows how such small rounding of values can accumulate to cause image
quality to deteriorate. The solution is obviously to look up the values without rounding. How do we
look up a pixel that does not exist ? We interpolate it from the values we can look up. By reading the
values of all the surrounding pixels, and then weighting those values according to the decimal part of
the pixel value, we can construct the value of the sub pixel. For example, in the above example, we
would multiply the values of column 26 by .125, and the values of column 27 by .875 to find the exact
value required.
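A quick scalar sanity check of that weighting, using made-up column values:

```csharp
using System;

class LerpDemo
{
    // Linear interpolation between two sample values, weighted by the
    // fractional part of the coordinate.
    public static double Lerp(double a, double b, double fraction)
        => (1.0 - fraction) * a + fraction * b;

    static void Main()
    {
        // Looking up "column 26.875": say column 26 holds 100 and
        // column 27 holds 200.
        double fraction = 26.875 - Math.Floor(26.875); // 0.875
        double value = Lerp(100, 200, fraction);
        Console.WriteLine(value); // 187.5 - much closer to column 27's value
    }
}
```

The bilinear filter below does exactly this, once along x for each of the two rows, and then once along y between the two results, for each colour channel.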
In order to make the example clearer, I have used GetPixel to read the four pixels in the area
surrounding the subpixel we want to find. In a future example I will use direct pixel access, which will
be faster, but also a lot more complex. The variable names have been chosen to help clarify what is
going on. Here is the missing code from above, the code which is executed when bBilinear =
true.



if (bBilinear)
{
    double fraction_x, fraction_y, one_minus_x, one_minus_y;
    int ceil_x, ceil_y, floor_x, floor_y;
    Color c1 = new Color();
    Color c2 = new Color();
    Color c3 = new Color();
    Color c4 = new Color();
    byte red, green, blue;

    byte b1, b2;

    for (int x = 0; x < b.Width; ++x)
        for (int y = 0; y < b.Height; ++y)
        {
            // Setup
            floor_x = (int)Math.Floor(x * nXFactor);
            floor_y = (int)Math.Floor(y * nYFactor);
            ceil_x = floor_x + 1;
            if (ceil_x >= bTemp.Width) ceil_x = floor_x;
            ceil_y = floor_y + 1;
            if (ceil_y >= bTemp.Height) ceil_y = floor_y;
            fraction_x = x * nXFactor - floor_x;
            fraction_y = y * nYFactor - floor_y;
            one_minus_x = 1.0 - fraction_x;
            one_minus_y = 1.0 - fraction_y;

            c1 = bTemp.GetPixel(floor_x, floor_y);
            c2 = bTemp.GetPixel(ceil_x, floor_y);
            c3 = bTemp.GetPixel(floor_x, ceil_y);
            c4 = bTemp.GetPixel(ceil_x, ceil_y);

            // Blue
            b1 = (byte)(one_minus_x * c1.B + fraction_x * c2.B);
            b2 = (byte)(one_minus_x * c3.B + fraction_x * c4.B);
            blue = (byte)(one_minus_y * (double)(b1) + fraction_y * (double)(b2));

            // Green
            b1 = (byte)(one_minus_x * c1.G + fraction_x * c2.G);
            b2 = (byte)(one_minus_x * c3.G + fraction_x * c4.G);
            green = (byte)(one_minus_y * (double)(b1) + fraction_y * (double)(b2));

            // Red
            b1 = (byte)(one_minus_x * c1.R + fraction_x * c2.R);
            b2 = (byte)(one_minus_x * c3.R + fraction_x * c4.R);
            red = (byte)(one_minus_y * (double)(b1) + fraction_y * (double)(b2));

            b.SetPixel(x, y, System.Drawing.Color.FromArgb(255, red, green, blue));
        }
}

The result is as follows:


As you can see, this is a much better result. It looks like it has gone through a softening filter, but it
looks much better than the one above. It is possible to get a slightly better result by going through a
much more complex process called bicubic filtering, but I do not intend to cover it, simply because I've
never done it.

What's Next

As I said above, the whole point of this article was to illustrate how bilinear filtering works. A bilinear
filter will be employed in my next article, which will talk about the process of writing a filter from
scratch to be optimised for a particular process, instead of through the sort of generic processes we've
used so far. At that point we will reimplement the bilinear filter to be more optimised, but hopefully
this version has helped you to understand that part of the process, so we can focus on other aspects in
the next article.
Image Processing for Dummies with C# and GDI+ Part 5 - Displacement Filters,
including swirl
By Christian Graus

In the fifth installment, we build a framework for generating filters that work by changing a pixel's location, rather than colour.

Introduction

Welcome again to my series on image processing. This time around I want to talk about displacement
filters. Most of the information you'll find about image processing is similar to the previous articles,
talking about changing an image by changing the colour values of pixels. Instead, the filters we are
looking at today change an image by changing each pixel's location. I got a lot of email for my last
article, asking why I bothered writing code to resize images. The answer was that the last article
explains bilinear filtering, a way of moving pixels so they are drawn to a theoretical location between
physical pixels. We will use that ability in this article, but I will not explain it; instead I recommend that
you review the prior article[^] if you are not familiar with bilinear filtering.

The framework

Once again we will start by implementing a framework which we can use to create filters. Our basic
approach will be to create a two dimensional array of points. The array will be the size of the image,
and each point will store the new location for the pixel at that index. We will do this two ways, one that
stores a relative location, and one that stores an absolute location. Finally, we will create our own point
struct, which contains two doubles instead of ints, which we will use to write the implementation that
performs the bilinear filtering.

Arrays in C#

I must admit I had not done anything with 2D arrays in C# before this, and they are very cool. The
code looks like this:



Point [,] pt = new Point[nWidth, nHeight];
This creates a 2D array dynamically, and we can access the pixels using notation like pt[2, 3],
instead of the C++ pt[2][3]. Not only is this much neater than C++, but a Point[,] is a valid
parameter to pass into a function, making it a snap to pass around arrays of size unknown at compile
time.

Offset Filter

The first helper function we will write will take a relative location, so for example if we want to move
pixel 2, 4 to location 5, 2, then pt[2, 4] will equal 3, -2. We could use Set/GetPixel to do this, but
we will continue to use direct access, which is probably faster. As we must now span an arbitrary
number of rows to access pixels from anywhere in the image, we will do so by using the Stride
member of the BitmapData, which we can multiply by our Y value to get the number of rows down.
Then our X value is multiplied by 3, because we are using 3 bytes per pixel ( 24 bit ) as our format.
The code looks like this:



public static bool OffsetFilter(Bitmap b, Point[,] offset)
{
    Bitmap bSrc = (Bitmap)b.Clone();

    // GDI+ still lies to us - the return format is BGR, NOT RGB.
    BitmapData bmData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
    BitmapData bmSrc = bSrc.LockBits(new Rectangle(0, 0, bSrc.Width, bSrc.Height),
        ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);

    int scanline = bmData.Stride;

    System.IntPtr Scan0 = bmData.Scan0;
    System.IntPtr SrcScan0 = bmSrc.Scan0;

    unsafe
    {
        byte * p = (byte *)(void *)Scan0;
        byte * pSrc = (byte *)(void *)SrcScan0;

        int nOffset = bmData.Stride - b.Width*3;

        int nWidth = b.Width;
        int nHeight = b.Height;

        int xOffset, yOffset;

        for (int y = 0; y < nHeight; ++y)
        {
            for (int x = 0; x < nWidth; ++x)
            {
                xOffset = offset[x,y].X;
                yOffset = offset[x,y].Y;

                p[0] = pSrc[((y+yOffset) * scanline) + ((x+xOffset) * 3)];
                p[1] = pSrc[((y+yOffset) * scanline) + ((x+xOffset) * 3) + 1];
                p[2] = pSrc[((y+yOffset) * scanline) + ((x+xOffset) * 3) + 2];

                p += 3;
            }
            p += nOffset;
        }
    }

    b.UnlockBits(bmData);
    bSrc.UnlockBits(bmSrc);

    return true;
}

You'll notice that the framework is there for a boolean success code, but it's not really used. The
OffsetFilterAbs does pretty much the same thing, except that if we want to move any pixel to location
3, 2, the point stored for that location will be 3, 2 and not an offset. OffsetFilterAntiAlias is much more
complex because it implements a bilinear filter; if you don't understand that code, refer to the previous
article[^].

Now, the filters

The basic format then for all the filters is to create an array, populate it with values ( either offset or
absolute ) and then pass the bitmap and the array to the appropriate function. There is a lot of
trigonometry going on in quite a few of these, which I am not going to discuss in any great detail,
instead focusing on what each filter does, and its parameters.

Flip

I guess the most obvious thing to do if we're going to move pixels around is flip the image. I'll show
the code for this one as it is a simple example, which will highlight the underlying process more so
than later examples such as swirl. The end result is obvious, so I won't slow your bandwidth with an
example.



public static bool Flip(Bitmap b, bool bHorz, bool bVert)
{
    int nWidth = b.Width;
    int nHeight = b.Height;

    Point[,] ptFlip = new Point[nWidth, nHeight];

    for (int x = 0; x < nWidth; ++x)
        for (int y = 0; y < nHeight; ++y)
        {
            ptFlip[x, y].X = (bHorz) ? nWidth - (x+1) : x;
            ptFlip[x, y].Y = (bVert) ? nHeight - (y+1) : y;
        }

    OffsetFilterAbs(b, ptFlip);

    return true;
}

RandomJitter

This filter takes a number and moves each pixel by a random amount that is within the bounds of that
number. This is surprisingly effective, doing it multiple times ends up with quite an effective oil
painting effect.
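The jitter code isn't reproduced in the article text, but given the OffsetFilter framework above, populating the offset array might look something like this. This is a sketch of the idea; the names are mine, and the shipped Filters.cs (with the bounds checking mentioned in the updates) is the real reference.

```csharp
using System;
using System.Drawing;

class JitterSketch
{
    // Builds a relative-offset array for OffsetFilter: every pixel is
    // nudged by a random amount in [-nDegree, nDegree] on each axis.
    public static Point[,] MakeJitter(int nWidth, int nHeight, int nDegree, Random r)
    {
        Point[,] pt = new Point[nWidth, nHeight];
        for (int x = 0; x < nWidth; ++x)
            for (int y = 0; y < nHeight; ++y)
            {
                pt[x, y].X = r.Next(-nDegree, nDegree + 1);
                pt[x, y].Y = r.Next(-nDegree, nDegree + 1);
            }
        return pt;
    }
}
```

Passing the result to OffsetFilter once gives a light shimmer; applying it several times builds up the oil-painting look.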
Swirl

This filter was my personal holy grail, and the impetus for my coming up with this stuff. Basically it
starts in the middle, and moves around in a circle, increasing the radius as it also increases the degree
of rotation. As a result of using trig, it benefits greatly from the bilinear filter, which is an option. I will
show both the normal and the bilinear filtered example for this image; all other filters that offer the
option, I will show with the filter on. The parameter that is passed in is a very small number, for the
example it is .05.
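The heart of the swirl is the polar mapping just described. A sketch of that mapping, under my own names (the article's code adds the array plumbing and bounds checks around it): for each destination pixel we find its polar coordinates about the image centre, rotate by an angle that grows with the radius, and read the source from there.

```csharp
using System;

class SwirlSketch
{
    // Maps a destination pixel (x, y) back to its swirled source
    // position. fDegree is the small factor (e.g. .05) mentioned above.
    // The result is a pair of doubles, ready for the bilinear lookup.
    public static void SwirlSource(int x, int y, int nWidth, int nHeight,
        double fDegree, out double srcX, out double srcY)
    {
        double midX = nWidth / 2.0, midY = nHeight / 2.0;
        double dx = x - midX, dy = y - midY;
        double radius = Math.Sqrt(dx * dx + dy * dy);

        // The rotation angle grows with the distance from the centre,
        // which is what twists the image into a spiral.
        double theta = Math.Atan2(dy, dx) + radius * fDegree;

        srcX = midX + radius * Math.Cos(theta);
        srcY = midY + radius * Math.Sin(theta);
    }
}
```

Because srcX and srcY land between physical pixels, this is exactly the situation the bilinear filter was built for.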
Sphere

The sphere filter is one example of a filter created through playing around. I was trying for the effect of
the image being wrapped around a ball. I don't think it works that well, but it is interesting and a
starting point for such an idea.

Time Warp

Another interesting filter, this one causes the image to warp as it disappears into the distance. The
example uses a factor of 15.
Moire

While playing with the swirl idea, I discovered that if I increased the rate at which the radius moved
out, I could either get a wide swirl, or with the right parameters, a moire effect was produced. The
example uses a factor of 3.
Water

A more useful filter is one that makes things appear to be underwater. This could be improved by the
addition of extra artifacts, such as rings and ripples. In effect this filter passes a sine wave through the
water in both x and y directions.
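A sketch of that sine-wave displacement, in terms of the OffsetFilter framework. The amplitude and wavelength constants here are my guesses, not the shipped values; the idea is that the x offset depends on y and vice versa, which is what produces the rippling look.

```csharp
using System;
using System.Drawing;

class WaterSketch
{
    // Relative-offset array for the water effect: a sine wave runs
    // through the image on both axes. nWave is the amplitude in pixels,
    // and 128.0 is an assumed wavelength.
    public static Point[,] MakeWater(int nWidth, int nHeight, int nWave)
    {
        Point[,] pt = new Point[nWidth, nHeight];
        for (int x = 0; x < nWidth; ++x)
            for (int y = 0; y < nHeight; ++y)
            {
                pt[x, y].X = (int)(nWave * Math.Sin(2.0 * Math.PI * y / 128.0));
                pt[x, y].Y = (int)(nWave * Math.Sin(2.0 * Math.PI * x / 128.0));
            }
        return pt;
    }
}
```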

Pixellate

This is an example of a filter which can be done generically but would be better done with specific
code. Pixellation is a way of referring to the fact that when an image is enlarged, curves become
blocky. This filter provides a mosaic effect by creating blocks that are the same colour as their top left
corner, and can also draw lines to mark the individual tiles. A better implementation would use the
average colour present within the block in question, as opposed to the top left corner, but this still
works quite well.
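The top-left-corner mosaic described above can be expressed as an absolute-offset array. This is a sketch of the idea under my own names; the real filter can also draw the grid lines between tiles.

```csharp
using System.Drawing;

class PixellateSketch
{
    // Absolute-offset array for a mosaic: every pixel inside an
    // nSize x nSize tile reads its colour from the tile's top-left
    // corner, which OffsetFilterAbs then applies.
    public static Point[,] MakeMosaic(int nWidth, int nHeight, int nSize)
    {
        Point[,] pt = new Point[nWidth, nHeight];
        for (int x = 0; x < nWidth; ++x)
            for (int y = 0; y < nHeight; ++y)
            {
                pt[x, y].X = x - (x % nSize);
                pt[x, y].Y = y - (y % nSize);
            }
        return pt;
    }
}
```

Swapping the corner colour for the tile's average colour, as suggested above, would need a pass over the source pixels first, which is why a purpose-built filter would do it better than the displacement framework.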
Conclusion

The filters provided are designed to show some of the things you can do with a displacement
framework, and to provide a variety of samples from which you can derive your own filters. I hope you
find the examples useful, and the framework a good starting point for your own explorations of the
underlying concept. I hope next to demonstrate writing of a specific one-off filter, and to discuss how
this is always the most flexible approach, although transformation matrices and displacement
techniques are an excellent way of establishing rough ideas and implementing general concepts.

Updates

Version 1.01 : Added some bounds checking code in the three main filters so that the filters do not
crash if any values passed in are out of bounds. Some filters generate some out of bounds values on
the edges, and checking this way causes more values to be processed than creating a border around
all images.
I've been trying to "invert" the resulting image from your sphere filter. Instead of having it "pinch" out
and upwards towards the viewer, I'd like to make it "punch" downwards into the image.

My reason for this is that I'm obtaining images from webcam pointed at a reflecting dome, and I'd like
them to appear flattened out. The images from the cam naturally have the effect of your Sphere filter
due to the spherical shape of my reflector, and I just want to cancel it out. It's basically the same as
the Pinch Filter in Photoshop. Your Sphere filter represents a negative amount in the PS filter, and I
need to make it extend into the positive amount.

I saw this Pinch/Unbarrel technique used with Bomberman Evolved (YouTube:


http://www.youtube.com/watch?v=49rDCWO-ui8[^]) and it works great.

If someone (the author, or otherwise) could help me determine what line(s) of code need to be
changed in the file "Filters.cs" inside the function "public static bool Sphere(Bitmap b, bool
bSmoothing)" I would be most grateful. I don't fully understand the logic for determining values of
newX and newY and I've already tried numerous changes to no avail.

Just to make sure I'm clear, here are some pictures...

My images currently look like they have Barrel Distortion, which is the same as applying the Sphere
filter
http://www.umich.edu/~lowbrows/guide/opticaljargon8.gif[^]

And I want to apply a Pinch or Pincushion to it, like this


http://www.umich.edu/~lowbrows/guide/opticaljargon9.gif[^]

Thank you!
How can I filter a 24 bit RGB BMP file in a way where pixels that are almost red convert into completely
red pixels, pixels that are almost blue convert into completely blue pixels, and pixels that are almost
green convert into completely green pixels? When I say a pixel is almost a color, I mean that the
channel of that color is within 10% of 255 and the other two channels are within 10% of 0 - in numbers
that means (255-25) (25) (25), which would then be (230) (25) (25). The output file must be 24 bit RGB bmp.

Thanks a lot.
Basically, use my code as a template, and then check each pixel value, and move it to 255 if it's close
enough.

Christian Graus - Microsoft MVP - C++


Hi christian. Thanks for helping me with the GrayScale information last week. I was hoping you could
help me with writing an array of pixel values toa file. Each element only holding one value. I have tried
using this:
//FileStream stmPixels = new FileStream("grayPixels.txt", FileMode.Create, FileAccess.Write,
FileShare.None);
//StreamWriter stmWriter = new StreamWriter(stmPixels);
//stmWriter.WriteLine(pixelArray[x].ToString());
//stmWriter.Flush();
//stmPixels.Flush();
//stmPixels.Close();
But it doesn't seem to work, what am I doing wrong.
Timothy
But it doesn't seem to work, what am I doing wrong.

Your code is all commented out

Timothy Abrahams wrote:


//stmWriter.WriteLine(pixelArray[x].ToString());

This should be in a loop, but the rest of the code needs to be outside it, so the file is created once and
then written to for all values.
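In other words, something along these lines (with stand-in values for the array):

```csharp
using System.IO;

class WritePixels
{
    // Create the writer once, loop over all the values, and let
    // using() flush and close the file when the block ends.
    public static void WriteAll(int[] pixelArray, string path)
    {
        using (StreamWriter stmWriter = new StreamWriter(path))
        {
            for (int x = 0; x < pixelArray.Length; ++x)
                stmWriter.WriteLine(pixelArray[x].ToString());
        }
    }

    static void Main()
    {
        WriteAll(new int[] { 12, 200, 45 }, "grayPixels.txt");
    }
}
```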

Christian Graus - Microsoft MVP - C++


Image Processing for Dummies with C# and GDI+ Part 6 - The HSL color space
By Christian Graus

A discussion of the HSL color space, including code for a color picker and image filters

Introduction

Well, it's been quite a while since I wrote one of these. As work is a little slow at the moment, I
thought I'd do one on an important topic, that of color. An important caveat - I am color blind. So
when I talk about how the colors work, I'm largely taking other people's word for it.

Background (optional)

I'm sure most of us are aware that when you need to specify a color to your PC, you do it with an RGB
triple, or ARGB if you want to specify transparency. What this in essence means is that your CRT has
three color guns, and your LCD has sets of three colored lights to make up a pixel. The merging of
differing levels of these three colors equates to the range of colors that can be displayed by your
computer, like this:

This is, however, not the only possible way to describe color, nor is it a method that makes much
sense to humans. If I were to ask you how to describe orange, or yellow, or purple, using RGB,
chances are you'd have to undergo some trial and error to work it out. HSL is the most common color
system that exists to be human friendly, rather than machine friendly.
As RGB stands for red, green, blue, so too HSL is an acronym, in this case for hue, saturation,
luminance. The three components of color are best specified in that order, as they represent a constant
refining of the value ( that is, saturation and luminance values are close to meaningless without a hue
).

Hue

The hue is the actual base color being used, free of any modification to brightness or strength. It is
commonly represented as a circle, in which the hue value ( which ranges from 0 to 360 ) indicates the
angle in degrees of the color as present on the wheel. The following image shows the hue circle, with
constant luminance and saturation at 50%.

Saturation

Saturation describes how 'colorful' a color is, for example, a fluorescent color would have a high
saturation. In order to demonstrate this, I have provided three screenshots, all of the hue wheel at .5
luminance, and with .25, .5 and .75 saturation. A saturation of 0 makes an image greyscale ( as it has
no color in it ), and so I don't provide an image at 100% either, because I wanted the range shown to be even.

Luminance

Luminance describes how bright a color is, so that full luminance is always white, and no luminance is
always black. In order to demonstrate this, I have provided three screenshots, all of the hue wheel
at .5 saturation, and with .25, .5 and .75 luminance.
The color chart

The sample application continues to build on previous installments, and thus builds the code base for
use in other projects. There is now a new menu called 'colorspaces', with a view to expanding it to
cover other color spaces in the future. The 'HSL Chart' menu item brings up a dialog with a hue circle,
and a slider on the side, like this:

The hue wheel either modifies saturation, or luminance from the centre to the outside, and the slider
then modifies luminance or saturation accordingly. This should give you a really good idea of exactly
how these parameters work, and how they modify colors.

Using the code

So, we have this color space, but what do we do with it ? Well, two things pop immediately to mind.
First, we can provide means so that a user can select colors using this color space, instead of having to
provide an RGB triple. Secondly, we can provide image filters that allow modification of an image
based on these three values. But in order to do any of this ( or even to do what you've seen already )
we need to be able to move within this color space, we need to be able to convert between HLS and
RGB. In order to do this, the first component we will examine is the HLS class.

HLS class
All the new code is in the ColorSpace.cs file. The first class in there is called YUV, another color system
that I wrote a class for, but do not examine here. Next is the HLS class, which encapsulates a HLS
color. It keeps these values in private members and exposes them through properties, so that we can
correct out of bounds values. I chose not to throw an exception, because when we use this class with
filters, we will almost certainly pass in an out of bounds value.



private float h;
private float s;
private float l;

public float Hue
{
    get
    {
        return h;
    }
    set
    {
        // Note that we don't just clamp, as 365 degrees, for
        // example, is 5 degrees plus a full turn.
        h = (float)(Math.Abs(value)%360);
    }
}

public float Saturation
{
    get
    {
        return s;
    }
    set
    {
        s = (float)Math.Max(Math.Min(1.0, value), 0.0);
    }
}

public float Luminance
{
    get
    {
        return l;
    }
    set
    {
        l = (float)Math.Max(Math.Min(1.0, value), 0.0);
    }
}

The constructor with no arguments is private so that we can't construct an HLS object without
specifying its values. We also provide a property called RGB, which returns a Color that maps to the
current HLS values.

In addition, two static methods are provided, which return an HSL object from either a Color, or
specified red, green and blue values. Our filters will use these static methods to build an HLS object,
then modify one of these values before requesting the modified colors.
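For reference, the standard RGB to HSL formulae look roughly like this. This is a sketch under my own names; the article's actual conversions live in ColorSpace.cs and may differ in detail.

```csharp
using System;

class HslSketch
{
    // Converts r, g, b in 0..1 to hue (0..360), saturation and
    // luminance (both 0..1), using the usual max/min formulation.
    public static void FromRgb(double r, double g, double b,
        out double h, out double s, out double l)
    {
        double max = Math.Max(r, Math.Max(g, b));
        double min = Math.Min(r, Math.Min(g, b));
        l = (max + min) / 2.0;

        if (max == min) { h = 0; s = 0; return; } // achromatic - a grey

        double d = max - min;
        s = l > 0.5 ? d / (2.0 - max - min) : d / (max + min);

        // Hue depends on which channel dominates.
        if (max == r)      h = 60.0 * (((g - b) / d) % 6.0);
        else if (max == g) h = 60.0 * ((b - r) / d + 2.0);
        else               h = 60.0 * ((r - g) / d + 4.0);
        if (h < 0) h += 360.0;
    }
}
```

Pure red comes out as hue 0, full saturation, half luminance, which matches the hue wheel described earlier.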

Color Picker

Most HSL color pickers present a hue wheel with varying saturation from the centre to the edge, and
then a slider to set luminance for the chosen hue/saturation combination. I don't like this format,
because naturally values towards the centre of the circle are underrepresented, and harder to pick.
Instead, I propose a system of three sliders, one each for hue, saturation and luminance. Saturation
and luminance on the hue slider are set to .5, as is luminance on the saturation slider. Therefore, it's
intuitive to move from left to right and select a color.
As you can see, text boxes are provided as well as sliders, the selected color is shown on the right, and
its RGB values are also displayed. The test application will remember the selected color and initialise
the dialog with that color when OK is pressed. The HSLColorPicker has a SelectedColor property which
can be set before displaying the dialog, and which returns the selected color after the dialog is closed.
It returns a Color rather than a HSL object, but this can easily be changed if desired.

Image filters

Three image filters are provided, one each for hue, saturation and luminance. The filters take a float
and multiply the value being filtered by that number, so that 1 is an identity transform. This causes all
values to trend evenly, but has the side effect of stopping values of 0 from changing at all. There are
numerous ways around this, including adding a small number to values before multiplication, or
accepting a value to add as well as one to multiply by. The hue filter is kind of odd, given the nature of
the hue wheel, it simply changes the colors to unrelated values. The saturation and luminance filters
are, however, quite useful and worth incorporating into any image processing library.
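The scalar transform at the heart of the saturation filter, isolated so the identity case, the clamping, and the stuck-at-zero side effect are easy to see. The names are mine; in the shipped code the clamping happens in the HLS class's property setters.

```csharp
using System;

class SaturationScale
{
    // Multiply the saturation by a factor and clamp back to [0, 1] -
    // the per-pixel step between the RGB->HSL and HSL->RGB conversions.
    public static double ScaleSaturation(double s, double factor)
        => Math.Max(0.0, Math.Min(1.0, s * factor));

    static void Main()
    {
        Console.WriteLine(ScaleSaturation(0.4, 1.0)); // identity transform
        Console.WriteLine(ScaleSaturation(0.4, 3.0)); // clamped to 1
        Console.WriteLine(ScaleSaturation(0.0, 2.0)); // 0 never changes
    }
}
```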

As always, I present my son as a model for my filters. From top to bottom is the normal image, the
hue filter, the saturation filter, and the luminance filter. I've tried to use extreme values to exaggerate
the effect to make it obvious. The saturation effect in particular is not that obvious, because his car is
fluorescently colored anyhow.
Conclusion

There are numerous ways to represent color, in this article I have focused on one way that is
commonly used in paint programs and so on, and which translates easily to human understanding. This
means both that it's a good way of asking people to select a color, and that filtering by enhancing or
suppressing these values will result in an effect that has uniform meaning to the human eye. Any
person who needs to ask a user to select a color should consider using HSL as the means of doing so.
