
Dr. B.A.T. University, Lonere

Image Cartoonize with Steganography

CHAPTER 1 MULTIPASS BILATERAL FILTER – CARTOONIZER
1.1 Introduction
This application applies a multipass bilateral filter to colour images. Bilateral filtering is an edge-preserving smoothing filter. The technique extends Gaussian smoothing by weighting the filter coefficients with the corresponding relative pixel intensities: pixels that are very different in intensity from the central pixel are weighted less, even when they lie close to it. The result is effectively a convolution with a non-linear Gaussian filter whose weights depend on pixel intensities. It is applied as two Gaussian filters over a localized pixel neighbourhood, one in the spatial domain and one in the intensity domain.
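As a rough illustration of this two-domain weighting, here is a minimal single-pass bilateral filter in Python/NumPy. It is a sketch only: the parameter names (radius, sigma_spatial, sigma_intensity) are illustrative and are not the application's actual identifiers.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_spatial=2.0, sigma_intensity=0.1):
    """Brute-force single-pass bilateral filter on a 2-D (grayscale) image.

    Each output pixel is a weighted average of its neighbourhood, where the
    weight is the product of a spatial Gaussian and an intensity Gaussian.
    """
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Pre-compute the spatial Gaussian over the (2R+1) x (2R+1) window.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial_w = np.exp(-(xx**2 + yy**2) / (2.0 * sigma_spatial**2))
    padded = np.pad(img, radius, mode='reflect')
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Intensity weights: pixels far in value from the centre count less.
            intensity_w = np.exp(-((window - img[i, j])**2) / (2.0 * sigma_intensity**2))
            weights = spatial_w * intensity_w
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

Running the filter several times on its own output is what "multipass" refers to: each pass flattens homogeneous regions further while edges are preserved.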

1.1.1 Changes
V6 – improved contour algorithm; luminance segmentation; bilateral filter optimization; refreshed interface.

1.1.2 Cartoon
To obtain cartoon-like results, these new effects have been implemented:

1.1.3 Contour
Contour, when used, is applied to the bilateral filter output in the CieLAB colour space. It has two parameters: the general amount, and how much it is based on Luminance (L) or Hue (A/B). It is created with a Sobel filter applied to separate channels: "L" and "A & B" (of the CieLAB colour space).
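A minimal sketch of this idea, assuming scikit-image is available for the colour-space conversion and the Sobel filter; the parameter names amount and lum_hue are stand-ins for the application's "General Amount" and "Lum/Hue" controls.

```python
import numpy as np
from skimage import color, filters

def contour_map(rgb, amount=1.0, lum_hue=0.5):
    """Edge map mixing Sobel responses from the L channel and the A/B channels
    of CieLAB. lum_hue = 1.0 -> contour driven purely by luminance,
    lum_hue = 0.0 -> contour driven purely by the chromatic (A, B) channels."""
    lab = color.rgb2lab(rgb)                        # L in [0, 100], a/b roughly [-128, 127]
    edges_l = filters.sobel(lab[..., 0])            # luminance edges
    edges_ab = np.hypot(filters.sobel(lab[..., 1]),
                        filters.sobel(lab[..., 2]))  # chroma edges
    edges = lum_hue * edges_l + (1.0 - lum_hue) * edges_ab
    return np.clip(amount * edges, 0.0, None)
```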


1.2 Luminance Segmentation

This is a nice way to obtain a more cartoon-like output.

Algorithm (Fig. 3): a histogram of the Luminance channel (L) of the bilateral-filtered picture is built. This histogram is then divided into N segments, in such a way that each segment contains the same number of intensity values (example: yellow lines). Each segment is assigned a luminance value given by the weighted average of the histogram values within that segment. These segments are then merged with the original bilateral output by a value called "Presence": 0% Presence means that the bilateral filter output luminance is left unchanged (no segmentation applied); 25% Presence means that the luminance of the output picture is given by 25% of the segmented luminance and 75% of the bilateral filter output luminance, and so on; 100% Presence means that the luminance of the output picture is fully given by the segmented luminance. Note that only luminance values are affected. Like the contour, this works in the CieLAB colour space. The best way to get a feel for how this works is to tweak these parameters and watch the output results.
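The following sketch, assuming the luminance channel L is a float NumPy array, shows one way to implement the equal-population segmentation and the Presence blend described above; names and details are illustrative only.

```python
import numpy as np

def segment_luminance(L, n_segments=4, presence=0.6):
    """Quantize the luminance channel into n_segments bins that each hold
    roughly the same number of pixels, assign each bin its mean luminance,
    then blend with the original by 'presence'."""
    flat = L.ravel()
    # Equal-population bin edges via quantiles of the luminance histogram.
    edges = np.quantile(flat, np.linspace(0.0, 1.0, n_segments + 1))
    seg = np.empty_like(L)
    for k in range(n_segments):
        lo, hi = edges[k], edges[k + 1]
        if k == n_segments - 1:
            mask = (L >= lo) & (L <= hi)
        else:
            mask = (L >= lo) & (L < hi)
        if mask.any():
            seg[mask] = L[mask].mean()   # representative luminance of the segment
    # presence = 0 -> pure bilateral luminance, 1 -> pure segmented luminance.
    return (1.0 - presence) * L + presence * seg
```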

1.4 Flow

As we can see, before applying the Bilateral Filter it is possible to apply a pre-effect. The pre-effect can be one of these:
1. Auto Histogram Equalize
2. Exposure Control
3. Brightness, Contrast & Saturation

After the Bilateral Filter there are four possible paths to follow (a sketch of the combined flow is given after this list):

1.4.1 Pure Bilateral (black) 0-1-6
The output picture is the output of the bilateral filter. To follow this path set Contour Amount = 0 and Segmentation Presence = 0.

1.4.2 Add a Contour (red) 0-1-2-5-6
[1-2] A contour filter is created from the bilateral output. [1-5] The contour is applied to the bilateral output. To follow this path set Contour Amount > 0 and Segmentation Presence = 0.

1.4.3 Luminance Segmentation (blue) 0-1-3-4-6
[1-3] A "Luminance Segmentation" is applied to the bilateral output. (The number of segments is customizable.) [1-4] The "Luminance Segmented" picture is merged with the bilateral output; this merge is customizable with the "Presence %" slidebar. To follow this path set Contour Amount = 0 and Segmentation Presence > 0.

1.4.4 CARTOON-like: Contour and Luminance Segmentation (green) 0-1-2-3-4-5-6
[1-2] A contour filter is created from the bilateral output. [1-3] A "Luminance Segmentation" is applied to the bilateral output. (The number of segments is customizable.) [1-4] The "Luminance Segmented" picture is merged with the bilateral output; this merge is customizable with the "Presence %" slidebar. [4-5] The contour is applied to the output of the above merging. To follow this path set Contour Amount > 0 and Segmentation Presence > 0.
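Putting the four paths together, here is a sketch of the dispatch logic. It is only one possible interpretation of the flow: the stage functions are passed in as callables so the sketch stays independent of any particular implementation, and for simplicity it blends whole images rather than only the luminance channel.

```python
import numpy as np

def cartoonize(img, bilateral, contour=None, segment=None, pre_effect=None,
               contour_amount=0.0, presence=0.0):
    """Flow of section 1.4: optional pre-effect, bilateral filter, then one of
    the four paths selected by contour_amount and presence.

    bilateral, contour, segment and pre_effect are callables taking and
    returning float image arrays."""
    x = pre_effect(img) if pre_effect is not None else img           # node 0 -> 1
    b = bilateral(x)                                                 # node 1
    out = b                                                          # 1.4.1 pure bilateral
    if presence > 0.0 and segment is not None:                       # 1.4.3 / 1.4.4
        out = (1.0 - presence) * b + presence * segment(b)           # nodes 3 -> 4
    if contour_amount > 0.0 and contour is not None:                 # 1.4.2 / 1.4.4
        out = np.clip(out - contour_amount * contour(b), 0.0, None)  # node 5
    return out                                                       # node 6
```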

1.4.5 Parameters Details
Parameters are saved in the files "LastSettings.txt" and "LastSettingsEX.txt" for easy reading. Not to lose them, make a copy.

1.5 Pre-EFFECT
As seen above it can be: Auto Equalize, Exposure, or BCS (Brightness, Contrast, Saturation).

1.6 BILATERAL FILTER

1.6.1 Colour Space
This is the colour space the bilateral filter will be applied to.
- CieLAB: the bilateral filter is computed only from the Luminance (L) channel of the CieLAB colour space (faster).
- RGB: the bilateral filter is computed from all channels: Red, Green and Blue.

1.6.2 Radius
Radius of the intensity and spatial domains.
R=1 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 9 neighbourhood pixels.
R=2 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 25 neighbourhood pixels.
R=3 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 49 neighbourhood pixels.
R=4 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 81 neighbourhood pixels.
R=5 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 121 neighbourhood pixels.

1.6.3 Intensity Mode
This is the type of decay curve for the intensity domain. It can be:
0 Gaussian1
1 Gaussian2
2 InvProportional
3 Linear

1.6.4 Intensity Sigma
This is how fast the intensity-domain weight decreases.

1.6.5 Spatial Sigma
This is how fast the spatial-domain weight decreases going far from the central pixel.

1.6.6 Iterations
How many times the bilateral filter is applied.

1.7 CONTOUR
Amount – contour darkness.
Lum/Hue – how much the contour is based on Luminance or Hue.

1.8 LUMINANCE SEGMENTATION
Segments – number of luminance segments.

Samples
These are good settings for cartoon-like results:

Pre-EFFECT: Brightness: 110, Contrast: 0, Saturation: 100
BILATERAL FILTER: Colour Space: CieLAB, Radius: 5, Intensity Mode: 1 (Gaussian2), Intensity Sigma: 0.355, Spatial Sigma: 1000, Iterations: 3
CONTOUR: Amount: 100, Lum/Hue: 0.5
LUMINANCE SEGMENTATION: Segments: 4, Presence: 60%

CHAPTER 2 COLOURIZATION USING OPTIMIZATION

2.1 Digital Image Processing

2.1.1 Background
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing & experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches & quickly prototype candidate solutions generally plays a major role in reducing the cost & time required to arrive at a viable system implementation.

2.1.2 What is DIP?
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y & the amplitude values of f are all finite discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location & value. These elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing & computer vision.

& high-level processes.B. description of that object to reduce them to a form suitable for computer processing & classification of individual objects.level processing involves “Making sense” of an ensemble of recognized objects.A. Digital image processing. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. y) is called the intensity of the image at that point. b]I: [0. as already defined is used successfully in a broad range of areas of exceptional social & economic value.T. one useful paradigm is to consider three types of computerized processes in this continuum: low-. I(x. Lonere Image Cartoonize with Stegnography There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. I (xylem) takes non-negative values assume the image is bounded by a rectangle [0. contrast enhancement & image sharpening.University. mid-. a] [0. y) is the intensity of the image at the point (x. b]  [0. 1 Gray scale image: A grayscale image is a function I (xylem) of the two spatial coordinates of the image plane. y) on the image plane. as in image analysis & at the far end of the continuum performing the cognitive functions normally associated with human vision. a]  [0.Dr. Finally higher. info) Dept of Information Technology 8 .1 What is an image? An image is represented as a two dimensional function f(x. Low-level process involves primitive operations such as image processing to reduce noise. y) where x and y are spatial co-ordinates and the amplitude of „f‟ at any pair of coordinates (x. 2. Mid-level process on images involves tasks such as segmentation.level process is characterized by the fact that both its inputs & outputs are images. A low. However.

2. Colour image: a colour image can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.

Figure 2.1: Given a grayscale image marked with some colour scribbles by the user (left), our algorithm produces a colourized image (middle).
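To make sampling and quantization concrete, here is a toy sketch: a continuous image function is sampled on a finite grid and its amplitudes are mapped to 256 integer gray levels. The function f used in the example is arbitrary and purely illustrative.

```python
import numpy as np

def digitize_image(f, width, height, levels=256):
    """Sample a continuous image function f(x, y) on a width x height grid
    (sampling) and map its amplitudes to integer gray levels (quantization)."""
    xs = np.linspace(0.0, 1.0, width)
    ys = np.linspace(0.0, 1.0, height)
    X, Y = np.meshgrid(xs, ys)
    samples = f(X, Y)                                          # sampling of coordinates
    samples = (samples - samples.min()) / (samples.ptp() + 1e-12)
    return np.round(samples * (levels - 1)).astype(np.uint8)   # quantization of amplitude

# Example: a smooth synthetic "scene" digitized to a 256 x 256, 8-bit image.
img = digitize_image(lambda x, y: np.sin(6 * x) * np.cos(4 * y), 256, 256)
```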

Abstract: Colourization is a computer-assisted process of adding colour to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colourization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper we present a simple colourization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise: neighboring pixels in space-time that have similar intensities should have similar colours. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few colour scribbles, and the indicated colours are automatically propagated in both space and time to produce a fully colourized image or sequence. We demonstrate that high quality colourizations of stills and movie clips may be obtained from a relatively modest amount of user input.

CR Categories: I.4.9 [Image Processing and Computer Vision]
Keywords: colourization, recolouring, segmentation

2.2 Introduction
Colourization is a term introduced by Wilson Markle in 1970 to describe the computer-assisted process he invented for adding colour to black and white movies or TV programs [Burns]. The term is now used generically to describe any technique for adding colour to monochrome stills and footage. Colourization of classic motion pictures has generated much controversy, which partially accounts for the fact that not many of these movies have been colourized to date. However, there are still massive amounts of black and white television shows that could be colourized: the artistic controversy is often irrelevant here, while the financial incentives are substantial, as was succinctly pointed out by Earl Glick in 1984: "You couldn't make Wyatt Earp today for $1 million an episode. But for $50,000 a segment, you can turn it into colour and have a brand new series with no residuals to pay." Colourization of still images also appears to be a topic of considerable interest among users of image editing software, as evidenced by multiple colourization tutorials on the World Wide Web.

A major difficulty with colourization, however, lies in the fact that it is an expensive and time-consuming process. For example, in order to colourize a still image an artist typically begins by segmenting the image into regions, and then proceeds to assign a colour to each region.

Unfortunately, automatic segmentation algorithms often fail to correctly identify fuzzy or complex region boundaries, such as the boundary between a subject's hair and her face. Thus, the artist is often left with the task of manually delineating complicated boundaries between regions. Colourization of movies requires, in addition, tracking regions across the frames of a shot. Existing tracking algorithms typically fail to robustly track non-rigid regions, again requiring massive user intervention in the process.

In this paper we describe a new interactive colourization technique that requires neither precise manual segmentation, nor accurate tracking. The technique is based on a unified framework applicable to both still images and image sequences. The user indicates how each region should be coloured by scribbling the desired colour in the interior of the region, instead of tracing out its precise boundary. Using these user-supplied constraints our technique automatically propagates colours to the remaining pixels in the image sequence. The underlying algorithm is based on the simple premise that nearby pixels in space-time that have similar gray levels should also have similar colours. This assumption leads to an optimization problem that can be solved efficiently using standard techniques. This colourization process is demonstrated in Figure 1. In addition to colourization of black and white images and movies, our technique is also applicable to selective recolouring, an extremely useful operation in digital photography and in special effects.

Our contribution, thus, is a new simple yet surprisingly effective interactive colourization technique that drastically reduces the amount of input required from the user.

2.3 Previous work
In Markle's original colourization process [Markle and Hunt 1987] a colour mask is manually painted for at least one reference frame in a shot. Motion detection and tracking are then applied, allowing colours to be automatically assigned to other frames in regions where no motion occurs. Colours in the vicinity of moving edges are assigned using optical flow, which often requires manual fixing by the operator. Although not much is publicly known about the techniques used in more contemporary colourization systems used in the industry, there are indications that these systems still rely on defining regions and tracking them between the frames of a shot.

Black Magic, a commercial software package for colourizing still images, provides the user with useful brushes and colour palettes, but the segmentation task is left entirely to the user. Welsh et al. [2002] describe a semi-automatic technique for colourizing a grayscale image by transferring colour from a reference colour image. They examine the luminance values in the neighbourhood of each pixel in the target image and transfer the colour from pixels with matching neighbourhoods in the reference image. This technique works well on images where differently coloured regions give rise to distinct luminance clusters, or possess distinct textures. In other cases, the user must direct the search for matching pixels by specifying swatches indicating corresponding regions in the two images. While this technique has produced some impressive results, note that the artistic control over the outcome is quite indirect: the artist must find reference images containing the desired colours over regions with similar textures to those that she wishes to colourize. It is also difficult to fine-tune the outcome selectively in problematic areas. Moreover, the technique does not explicitly enforce spatial continuity of the colours, and in some images it may assign vastly different colours to neighbouring pixels that have similar intensities. In contrast, in our technique the artist chooses the colours directly, and is able to refine the results by scribbling more colour where necessary.

2.4 Algorithm
We work in YUV colour space, commonly used in video, where Y is the monochromatic luminance channel, which we will refer to simply as intensity, while U and V are the chrominance channels, encoding the colour [Jack 2001]. The algorithm is given as input an intensity volume Y(x, y, t) and outputs two colour volumes U(x, y, t) and V(x, y, t). To simplify notation we will use boldface letters (e.g. r, s) to denote (x, y, t) triplets; thus, Y(r) is the intensity of a particular pixel. As mentioned in the introduction, we wish to impose the constraint that two neighbouring pixels r, s should have similar colours if their intensities are similar. Thus, we wish to minimize the difference between the colour U(r) at pixel r and the weighted average of the colours at neighbouring pixels:

J(U) = Σ_r ( U(r) − Σ_{s ∈ N(r)} w_rs U(s) )²     (1)

where w_rs is a weighting function that sums to one, large when Y(r) is similar to Y(s), and small when the two intensities are different. Similar weighting functions are used extensively in image segmentation algorithms, where they are usually referred to as affinity functions.
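For reference, a small sketch of the RGB to luminance/chrominance split; the coefficients below are the common analogue YUV constants. The algorithm only needs some separation into one intensity channel and two colour channels, so the exact constants are incidental.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Split an RGB image (floats in [0, 1]) into luminance Y and chrominance U, V."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # monochromatic intensity channel
    u = 0.492 * (b - y)                      # chrominance
    v = 0.877 * (r - y)                      # chrominance
    return y, u, v
```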

More formally. y1. Formally. y) denote the optical flow calculated at time t. t) is a neighbour of pixel (x1.bi are the same for all pixels in a small neighbourhood around r. let vx(x. J(V) subject to these constraints. y). Note that the optical flow is only used to define the neighbourhood of each pixel.bi variables yields an equation equivalent to equation 1 with a correlation based affinity function. In a single frame. t +1) if: The flow field vx(x0). Then the pixel (x0. after accounting for motion. we define two pixels as neighbours if their image locations.B. While this model adds to the system a pair of variables per each image window. The notation r belongs N(s) denotes the fact that r and s are neighbouring pixels. v(ri) = vi we minimize J(U). we define two pixels as neighbours if their image locations are nearby. this Dept of Information Technology 13 . vy(x. Between two successive frames. it assumes that the colour at a pixel U(r) is a linear function of the intensity Y(r): U(r) = aiY(r)+bi and the linear coefficients ai. The simplest one is commonly used by image segmentation algorithms and is based on the squared difference between the two intensities: A second weighting function is based on the normalized correlation between the two intensities: where mr and sr are the mean and variance of the intensities in a window around r. This assumption can be justified empirically and intuitively it means that when the intensity is constant the colour should be constant. are nearby. Since the cost functions are quadratic and the constraints are linear.T.Dr. vy(y0) is calculated using a standard motion estimation algorithm [Lucas and Kanade 1981]. Lonere Image Cartoonize with Stegnography have experimented with two weighting functions. Now given a set of locations ri where the colours are specified by the user u(ri) = ui. not to propagate colours through time. y0. a simple elimination of the ai.University.A. and when the intensity is an edge the colour should also be an edge (although the values on the two sides of the edge can be any two numbers). The correlation affinity can also be derived from assuming a local linear relation between colour and intensity.

Our algorithm is closely related to algorithms proposed for other tasks in image processing. In image denoising algorithms based on anisotropic diffusion one often minimizes a function similar to equation 1, but the function is applied to the image intensity as well. In image segmentation algorithms based on normalized cuts [Shi and Malik 1997], one attempts to find the second smallest eigenvector of the matrix D − W, where W is an n × n matrix (n being the number of pixels) whose elements are the pairwise affinities between pixels (i.e., the (r, s) entry of the matrix is w_rs) and D is a diagonal matrix whose diagonal elements are the sum of the affinities (in our case this is always 1). The second smallest eigenvector of any symmetric matrix A is a unit-norm vector x that minimizes xᵀAx and is orthogonal to the first eigenvector. By direct inspection, the quadratic form minimized by normalized cuts is exactly our cost function J, that is, xᵀ(D − W)x = J(x). Thus, our algorithm minimizes the same cost function but under different constraints.
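As an illustration of how such a system can be assembled and solved, here is a simplified single-image sketch in Python/SciPy, standing in for the "standard methods" mentioned above. It is not the paper's exact formulation: it enforces the smoothness term as hard equations, one row U(r) − Σ_s w_rs U(s) = 0 per unconstrained pixel and one row U(r) = u_r per scribbled pixel, rather than minimizing the full least-squares cost, and it uses the Gaussian affinity of equation 2.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def colourize_channel(Y, scribble_mask, scribble_values, win=1):
    """Propagate one chrominance channel over a grayscale image Y (2-D float array).

    scribble_mask   -- boolean array, True where the user specified a colour
    scribble_values -- the specified chrominance value at those pixels
    """
    h, w = Y.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    rows, cols, vals = [], [], []
    b = np.zeros(n)
    for i in range(h):
        for j in range(w):
            r = idx[i, j]
            if scribble_mask[i, j]:
                rows.append(r); cols.append(r); vals.append(1.0)
                b[r] = scribble_values[i, j]          # constrained pixel: U(r) = u_r
                continue
            i0, i1 = max(0, i - win), min(h, i + win + 1)
            j0, j1 = max(0, j - win), min(w, j + win + 1)
            nbr = idx[i0:i1, j0:j1].ravel()
            nbr = nbr[nbr != r]
            Ys = Y.ravel()[nbr]
            sigma2 = max(Ys.var(), 1e-6)
            wgt = np.exp(-((Ys - Y[i, j]) ** 2) / (2.0 * sigma2))
            wgt /= wgt.sum()
            rows.append(r); cols.append(r); vals.append(1.0)                 # U(r) ...
            rows.extend([r] * len(nbr)); cols.extend(nbr); vals.extend(-wgt)  # ... - sum w_rs U(s) = 0
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return spla.spsolve(A, b).reshape(h, w)
```

Calling this once for the U channel and once for the V channel of a scribbled image, and recombining the results with the original Y, gives a colourized image in the spirit of the method described here.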

2.5 RESULTS
The results shown here were all obtained using the correlation-based window (equation 3, or equivalently using the local linearity assumption). Visually similar results were also obtained with the Gaussian window (equation 2). The mean μ_r and variance σ_r² for each pixel were calculated by giving more weight to pixels with similar intensities. The threshold T in equation 4 was set to 1, so that the window used was 3×3×3. For still images we used Matlab's built-in least squares solver for sparse linear systems, and for the movie sequences we used a multigrid solver. Using the multigrid solver, the run time was approximately 15 seconds per frame.

Figure 2 shows some still grayscale images marked by the user's colour scribbles next to the corresponding colourization results. Since automating the choice of colours was not our goal in this work, we used the original colour channels of each image when picking the colours. As can be seen, very convincing results are generated by our algorithm even from a relatively small number of colour scribbles. Typically, the artist may want to start with a small number of colour scribbles, and then fine-tune the colourization results by adding more scribbles. Figure 3 demonstrates such a progression on a still image.

Figures 5 and 6 show selected frames from colourized movie clips. We have also successfully colourized several short clips from the television show "I Love Lucy" and from Chaplin's classic movie Modern Times. The original clips were obviously in black and white, so in these examples we did not have a colour reference to pick the colours from. Even though the total number of colour scribbles is quite modest, the resulting colourization is surprisingly convincing.

Figure 4 shows how our technique can be applied to recolouring. To change the colour of an orange in the top left image to green, the artist first defines a rough mask around it and then scribbles inside the orange using the desired colour. Our technique is then used to propagate the green colour until an intensity boundary is found. In this application the affinity between pixels is based not only on the similarity of their intensities, but also on the similarity of their colours in the original image. Specifically, we minimize the cost (equation 1) under two groups of constraints. First, for pixels covered by the user's scribbles, the final colour should be the colour of the scribble. Second, for pixels outside the mask, the colour should be the same as the original colour. All other colours are automatically determined by the optimization process. Note that, unlike global colourmap manipulations, our algorithm does not recolour the other orange in the image, since colours are not propagated across intensity boundaries. The bottom row of the figure shows another example.

Figures 7 and 8 compare our method to two alternative methods. In figure 7 the alternative method is one where the image is first segmented automatically and then the scribbled colours are used to "flood fill" each segment.

Figure 8 compares our method for colourizing image sequences to an alternative method where a single frame is colourized and then optical flow tracking is used to propagate the colours across time. In both cases the same colour scribbles were used, and distinctive colours were deliberately chosen so that flaws in the colourization would be more apparent.

Figure 7a shows the result of automatic segmentation computed using a version of the normalized cuts algorithm [Shi and Malik 1997]. Segmentation is a very difficult problem, and even state-of-the-art methods may fail to automatically delineate all the correct boundaries, such as the intricate boundary between the hair and the forehead, or the low-contrast boundary between the lips and the face. Consequently, the colourization achieved with this alternative method (figure 7b) is noticeably worse than the one computed by our method (figure 7c). In both cases, the results could be improved using more sophisticated algorithms, either for automatic segmentation or for tracking to propagate colours across time. In other words, if the automatic segmentation had been perfect, then flood filling segments would have produced perfect results. Likewise, if dense optical flow had been perfect, then propagating colours from a single frame would have also worked perfectly. Yet despite many years of research in computer vision, state-of-the-art algorithms still do not work perfectly in an automatic fashion. An advantage of our optimization framework is that we use segmentation cues and optical flow as hints for the correct colourization, but the colourization can be quite good even when these hints are wrong. Since our method uses optical flow only to define the local neighbourhood, it is much more robust to tracking failures.

Figure 2: Still image colourization examples. Top row: the input black-and-white images with scribbled colours. Bottom row: resulting colour images.

Figure 3: Progressively improving a colourization. The artist begins with the scribbles shown in (a1), which yield the result in (a2). Note that the table cloth gets the same pink colour as the girl's dress; also, some colour is bleeding from the cyan pacifier onto the wall behind it. By adding colour scribbles on the table cloth and on the wall (b1), these problems are eliminated (b2). Next, the artist decides to change the colour of the beads by sprinkling a few red pixels (c1), yielding the final result (c2). Note that it was not necessary to mark each and every bead.

Figure 4: A comparison with automatic segmentation. (a) Segmented image. (b) Result of colouring each segment with a constant colour. (c) Our result. For visualization purposes distinctive colours were used. Segmenting the fuzzy hair boundary is a difficult task for typical segmentation methods.
