Dr. B.A.T. University, Lonere

Image Cartoonize with Steganography

This application applies a multipass bilateral filter to colour images. Bilateral filtering is an edge-preserving smoothing filter. This technique extends the concept of Gaussian smoothing by weighting the filter coefficients with their corresponding relative pixel intensities. Pixels that are very different in intensity from the central pixel are weighted less, even though they may be in close proximity to the central pixel. This is effectively a convolution with a non-linear Gaussian filter, with weights based on pixel intensities. It is applied as two Gaussian filters over a localized pixel neighbourhood: one in the spatial domain, and one in the intensity domain.
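The two-Gaussian weighting described above can be sketched in a few lines of NumPy. This is a naive single-pass illustration of the idea, not the application's optimized multipass implementation; the function name and parameter defaults are assumptions made for this example.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_spatial=2.0, sigma_intensity=0.1):
    """Naive single-pass bilateral filter on a 2-D grayscale image in [0, 1].

    Each output pixel is a weighted average of its (2R+1)x(2R+1) window,
    where the weight is a spatial Gaussian (distance from the centre)
    multiplied by an intensity Gaussian (difference from the centre value),
    so neighbours across a strong edge contribute almost nothing.
    """
    h, w = img.shape
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian over the window once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_spatial**2))
    for i in range(h):
        for j in range(w):
            window = padded[i:i + 2*radius + 1, j:j + 2*radius + 1]
            # Intensity Gaussian: penalise neighbours unlike the centre pixel.
            intensity = np.exp(-((window - img[i, j])**2)
                               / (2 * sigma_intensity**2))
            weights = spatial * intensity
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out
```

In practice an optimized library routine (for example OpenCV's `cv2.bilateralFilter`) would be used instead of this per-pixel Python loop.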

1.1.1 Changes
V6 – Better Contour Algorithm; Luminance segmentation; Bilateral Filter optimization; Interface look.

1.1.2 Cartoon
To obtain cartoon-like results, these new effects have been implemented:

1.1.3 Contour
Contour, when used, is applied to the bilateral filter output in the CieLAB colour space. It has 2 parameters: the general Amount, and how much it is based on Luminance (L) or Hue (AB). It is created with a Sobel filter applied to separate channels: "L" and "A & B" (of the CieLAB colour space).
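As a rough illustration, the sketch below computes Sobel edge magnitudes per channel and blends the L-based and A/B-based edge maps by a Lum/Hue factor. The helper names and the max-combination of the A and B edge maps are assumptions for this example, not the application's exact code.

```python
import numpy as np

def sobel_magnitude(channel):
    """Gradient magnitude of one channel using the 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    p = np.pad(channel, 1, mode='edge')
    h, w = channel.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            win = p[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * win)
            gy[i, j] = np.sum(ky * win)
    return np.hypot(gx, gy)

def contour(L, A, B, amount=1.0, lum_hue=0.5):
    """Blend the L-based and A/B-based edge maps by the Lum/Hue parameter.

    lum_hue = 1.0 bases the contour entirely on Luminance, 0.0 entirely on
    the A & B (hue) channels; `amount` scales the overall contour strength.
    """
    edges_l = sobel_magnitude(L)
    edges_ab = np.maximum(sobel_magnitude(A), sobel_magnitude(B))
    return amount * (lum_hue * edges_l + (1.0 - lum_hue) * edges_ab)
```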

Dept of Information Technology


1.2 Luminance Segmentation
This is a nice way to obtain a more cartoon-like output. (Like the contour, this works in the CieLAB colour space.)
Algorithm: A histogram of the Luminance channel (L) of the bilateral-filtered picture is built. This histogram is then divided into N segments, in such a way that each segment contains the same number of intensity values. (Fig. example: yellow lines.) To each segment a Luminance value is assigned, given by the weighted average of the histogram values in that segment. These segments are then merged with the original bilateral output by a value called "Presence":
- 0% Presence means that the bilateral filter output luminance is unchanged (no segmentation applied).
- 25% Presence means that the Luminance of the output picture is given by 25% of the segmented Luminance and 75% of the bilateral filter output Luminance, and so on.
- 100% Presence means that the Luminance of the output picture is fully given by the segmented Luminance.
Note that we are talking only about Luminance values. The best way to get an idea of how this works is to tweak these parameters and watch the output results.
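The steps above (equal-population segments, per-segment average luminance, "Presence" blend) can be sketched as follows. This is an illustrative approximation: it uses quantiles for the equal-population split and a plain mean per segment, which may differ from the application's exact histogram handling.

```python
import numpy as np

def luminance_segmentation(L, n_segments=4, presence=0.6):
    """Quantize the L channel into n_segments holding equal pixel counts,
    replace each segment by its mean luminance, then blend the segmented
    luminance with the original by `presence` (0 = untouched, 1 = fully
    segmented)."""
    flat = L.ravel()
    # Equal-population boundaries: each segment gets the same number of values.
    qs = np.quantile(flat, np.linspace(0, 1, n_segments + 1))
    segmented = np.empty_like(L, dtype=float)
    for k in range(n_segments):
        lo, hi = qs[k], qs[k + 1]
        if k == n_segments - 1:
            mask = (L >= lo) & (L <= hi)   # last segment includes the top value
        else:
            mask = (L >= lo) & (L < hi)
        if mask.any():
            segmented[mask] = L[mask].mean()  # average luminance of the segment
    return (1.0 - presence) * L + presence * segmented
```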

1.4 Flow
As we can see, before applying the "Bilateral Filter" it is possible to apply a pre-effect. The pre-effect can be one of:
1. Auto Histogram Equalize
2. Exposure Control
3. Brightness, Contrast & Saturation
After the "Bilateral Filter" there are 4 possible paths to follow:

1.4.1 Pure Bilateral (black) 0-1-6
The output picture is the output of the bilateral filter. – To follow this path set Contour Amount = 0 and Segmentation Presence = 0.

1.4.2 Add a Contour (red) 0-1-2-5-6
[1-2] A contour filter is applied to the bilateral output. [1-5] The contour is applied to the bilateral output. – To follow this path set Contour Amount > 0 and Segmentation Presence = 0.

1.4.3 Luminance Segmentation (blue) 0-1-3-4-6
[1-3] A "Luminance Segmentation" is applied to the bilateral output. (The number of segments is customizable.)

[1-4] The output is the result of merging the "Luminance Segmented" picture with the bilateral output. This merge is customizable with the "Presence %" slidebar. – To follow this path set Contour Amount = 0 and Segmentation Presence > 0.

1.4.4 CARTOON-like: Contour and Luminance Segmentation (green) 0-1-2-3-4-5-6
[1-2] A contour filter is applied to the bilateral output. [1-3] A "Luminance Segmentation" is applied to the bilateral output. (The number of segments is customizable.) [1-4] We merge the "Luminance Segmented" picture with the bilateral output; this merge is customizable with the "Presence %" slidebar. [4-5] The contour is applied to the output of the above merging. – To follow this path set Contour Amount > 0 and Segmentation Presence > 0.

1.5 Parameters Details
Parameters are saved in the files "LastSettings.txt" and "LastSettingsEX.txt" for easy reading. To avoid losing them, make a copy.
Pre-EFFECT: As seen above, it can be: Auto Equalize; Exposure; BCS (Brightness, Contrast & Saturation).

1.6 BILATERAL FILTER

1.6.1 Colour Space
This is the colour space the bilateral filter will be applied to.
- RGB: the bilateral filter is computed from all channels: Red, Green and Blue.
- CieLAB: the bilateral filter is computed only from the Luminance (L) channel of the CieLAB colour space (faster).

1.6.2 Radius
Radius of the intensity and spatial domains.
R=1 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 9 neighbourhood pixels.
R=2 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 25 neighbourhood pixels.

R=3 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 49 neighbourhood pixels.
R=4 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 81 neighbourhood pixels.
R=5 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 121 neighbourhood pixels.

1.6.3 Intensity Mode
This is the type of decay curve for the intensity domain. It can be: 0 Gaussian1; 1 Gaussian2; 2 InvProportional; 3 Linear.

1.6.4 Intensity Sigma
This is how fast the intensity-domain weight decreases.

1.6.5 Spatial Sigma
This is how fast the spatial-domain weight decreases going far from the central pixel.

1.6.6 Iterations
How many times the bilateral filter is applied.

1.7 CONTOUR
Amount: contour darkness. Lum/Hue: how much the contour is based on Luminance or Hue.

1.8 LUMINANCE SEGMENTATION
Segments: number of Luminance segments.

Samples
These are good settings for cartoon-like results:
PreEFFECT:
Brightness: 110

Contrast: 0
Saturation: 100
BILATERAL FILTER:
Colour Space: CieLAB
Radius: 5
Intensity Mode: 1 Gaussian2
Intensity Sigma: 0.355
Spatial Sigma: 1000
Iterations: 3
CONTOUR:
Amount: 100
Lum/Hue: 0.5
LUMINANCE SEGMENTATION:
Segments: 4
Presence: 60%
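The four processing paths described in the flow section are selected purely by the Contour Amount and Segmentation Presence parameters. A small dispatcher makes the selection explicit; the function name is hypothetical and the node labels follow the flow diagram above.

```python
def select_path(contour_amount, seg_presence):
    """Return the node sequence for the four flow paths described above."""
    if contour_amount == 0 and seg_presence == 0:
        return "0-1-6"            # pure bilateral
    if contour_amount > 0 and seg_presence == 0:
        return "0-1-2-5-6"        # add a contour
    if contour_amount == 0 and seg_presence > 0:
        return "0-1-3-4-6"        # luminance segmentation
    return "0-1-2-3-4-5-6"        # cartoon-like: both effects
```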

CHAPTER 2
COLOURIZATION USING OPTIMIZATION

2.1 Digital Image Processing

2.1.1 Background
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing & experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches & quickly prototype candidate solutions generally plays a major role in reducing the cost & time required to arrive at a viable system implementation.

2.1.2 What is DIP?
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y & the amplitude values of f are all finite discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location & value. These elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing & computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes. A low-level process is characterized by the fact that both its inputs & outputs are images; it involves primitive operations such as image processing to reduce noise, contrast enhancement & image sharpening. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images; it involves tasks such as segmentation, description of objects to reduce them to a form suitable for computer processing, & classification of individual objects. Finally, higher-level processing involves "making sense" of an ensemble of recognized objects, as in image analysis, &, at the far end of the continuum, performing the cognitive functions normally associated with human vision. Digital image processing, as already defined, is used successfully in a broad range of areas of exceptional social & economic value.

2.2.1 What is an image?
An image is represented as a two-dimensional function f(x, y), where x and y are spatial coordinates and the amplitude of f at any pair of coordinates (x, y) is called the intensity of the image at that point.

1. Gray scale image: A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane; I(x, y) is the intensity of the image at the point (x, y) on the image plane. I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] x [0, b], we have I : [0, a] x [0, b] -> [0, inf).

2. Color image: It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue. An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling; digitizing the amplitude values is called quantization.

Figure 2.1: Given a grayscale image marked with some colour scribbles by the user (left), our algorithm produces a colourized image (middle).

Abstract: Colourization is a computer-assisted process of adding colour to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colourization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper we present a simple colourization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise: neighboring pixels in space-time that have similar intensities should have similar colours. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few colour scribbles, and the indicated colours are automatically propagated in both space and time to produce a fully colourized image or sequence. We demonstrate that high quality colourizations of stills and movie clips may be obtained from a relatively modest amount of user input.

CR Categories: I.4.9 [Image Processing and Computer Vision]
Keywords: colourization, recolouring, segmentation

2.2 Introduction
Colourization is a term introduced by Wilson Markle in 1970 to describe the computer-assisted process he invented for adding colour to black and white movies or TV programs [Burns]. The term is now used generically to describe any technique for adding colour to monochrome stills and footage. Colourization of classic motion pictures has generated much controversy, which partially accounts for the fact that not many of these movies have been colourized to date. However, there are still massive amounts of black and white television shows that could be colourized: the artistic controversy is often irrelevant here, while the financial incentives are substantial, as was succinctly pointed out by Earl Glick in 1984: "You couldn't make Wyatt Earp today for $1 million an episode. But for $50,000 a segment, you can turn it into colour and have a brand new series with no residuals to pay."

Colourization of still images also appears to be a topic of considerable interest among users of image editing software, as evidenced by multiple colourization tutorials on the World Wide Web. A major difficulty with colourization, however, lies in the fact that it is an expensive and time-consuming process. For example, in order to colourize a still image, an

artist typically begins by segmenting the image into regions, and then proceeds to assign a colour to each region. Unfortunately, automatic segmentation algorithms often fail to correctly identify fuzzy or complex region boundaries, such as the boundary between a subject's hair and her face. Thus, the artist is often left with the task of manually delineating complicated boundaries between regions. Colourization of movies requires, in addition, tracking regions across the frames of a shot. Existing tracking algorithms typically fail to robustly track non-rigid regions, again requiring massive user intervention in the process.

In this paper we describe a new interactive colourization technique that requires neither precise manual segmentation, nor accurate tracking. The technique is based on a unified framework applicable to both still images and image sequences. The user indicates how each region should be coloured by scribbling the desired colour in the interior of the region, instead of tracing out its precise boundary. Using these user-supplied constraints our technique automatically propagates colours to the remaining pixels in the image sequence. This colourization process is demonstrated in Figure 1. In addition to colourization of black and white images and movies, our technique is also applicable to selective recolouring, an extremely useful operation in digital photography and in special effects.

The underlying algorithm is based on the simple premise that nearby pixels in space-time that have similar gray levels should also have similar colours. This assumption leads to an optimization problem that can be solved efficiently using standard techniques. Our contribution, thus, is a new simple yet surprisingly effective interactive colourization technique that drastically reduces the amount of input required from the user.

2.3 Previous work
In Markle's original colourization process [Markle and Hunt 1987] a colour mask is manually painted for at least one reference frame in a shot. Motion detection and tracking are then applied, allowing colours to be automatically assigned to other frames in regions where no motion occurs. Colours in the vicinity of moving edges are assigned using optical flow, which often requires manual fixing by the operator. Although not much is publicly known about the techniques used in more contemporary colourization systems used in the industry, there are indications that these systems still rely on defining regions and tracking them between the frames of a shot.

Black Magic, a commercial software for colourizing still images, provides the user with useful brushes and colour palettes, but the segmentation task is left entirely to the user. Welsh et al. describe a semi-automatic technique for colourizing a grayscale image by transferring colour from a reference colour image. They examine the luminance values in the neighbourhood of each pixel in the target image and transfer the colour from pixels with matching neighbourhoods in the reference image. This technique works well on images where differently coloured regions give rise to distinct luminance clusters, or possess distinct textures. In other cases, the user must direct the search for matching pixels by specifying swatches indicating corresponding regions in the two images. While this technique has produced some impressive results, note that the artistic control over the outcome is quite indirect: the artist must find reference images containing the desired colours over regions with similar textures to those that she wishes to colourize. It is also difficult to fine-tune the outcome selectively in problematic areas. In contrast, in our technique the artist chooses the colours directly, and is able to refine the results by scribbling more colour where necessary. Moreover, this technique does not explicitly enforce spatial continuity of the colours, and in some images it may assign vastly different colours to neighbouring pixels that have similar intensities.

2.4 Algorithm
We work in YUV colour space, commonly used in video, where Y is the monochromatic luminance channel, which we will refer to simply as intensity, while U and V are the chrominance channels, encoding the colour [Jack 2001]. The algorithm is given as input an intensity volume Y(x, y, t) and outputs two colour volumes U(x, y, t) and V(x, y, t). To simplify notation we will use boldface letters (e.g. r, s) to denote (x, y, t) triplets. Thus, Y(r) is the intensity of a particular pixel.

As mentioned in the introduction, we wish to impose the constraint that two neighbouring pixels r, s should have similar colours if their intensities are similar. Thus, we wish to minimize the difference between the colour U(r) at pixel r and the weighted average of the colours at neighbouring pixels:

J(U) = sum_r ( U(r) - sum_{s in N(r)} w_rs U(s) )^2        (1)

where w_rs is a weighting function that sums to one, large when Y(r) is similar to Y(s), and small when the two intensities are different. Similar weighting functions are used extensively in image segmentation algorithms, where they are usually referred to as affinity functions. We
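On a toy 1-D "image", the cost J(U) can be evaluated directly from a table of affinities. This is purely illustrative; the function name and the dictionary-of-neighbours representation are assumptions made for the example.

```python
def colour_cost(U, weights):
    """J(U) = sum over pixels r of (U[r] - weighted average of neighbours)^2.

    `U` is a list of chrominance values; `weights[r]` maps each neighbour s
    of pixel r to its affinity w_rs, with the w_rs for each r summing to one.
    """
    total = 0.0
    for r, nbrs in weights.items():
        avg = sum(w * U[s] for s, w in nbrs.items())
        total += (U[r] - avg) ** 2
    return total
```

A constant chrominance field always has zero cost, which matches the intuition that uniform colour over uniform intensity is never penalised.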

have experimented with two weighting functions. The simplest one is commonly used by image segmentation algorithms and is based on the squared difference between the two intensities:

w_rs proportional to exp( -(Y(r) - Y(s))^2 / (2 sigma_r^2) )        (2)

A second weighting function is based on the normalized correlation between the two intensities:

w_rs proportional to 1 + (1 / sigma_r^2) (Y(r) - mu_r)(Y(s) - mu_r)        (3)

where mu_r and sigma_r^2 are the mean and variance of the intensities in a window around r. Similar weighting functions are used extensively in image segmentation algorithms, where they are usually referred to as affinity functions. The correlation affinity can also be derived from assuming a local linear relation between colour and intensity. Formally, it assumes that the colour at a pixel U(r) is a linear function of the intensity Y(r): U(r) = a_i Y(r) + b_i, and that the linear coefficients a_i, b_i are the same for all pixels in a small neighbourhood around r. This assumption can be justified empirically, and intuitively it means that when the intensity is constant the colour should be constant, and when the intensity is an edge the colour should also be an edge (although the values on the two sides of the edge can be any two numbers). While this model adds to the system a pair of variables per each image window, a simple elimination of the a_i, b_i variables yields an equation equivalent to equation 1 with a correlation-based affinity function.

The notation r in N(s) denotes the fact that r and s are neighbouring pixels. In a single frame, we define two pixels as neighbours if their image locations are nearby. Between two successive frames, we define two pixels as neighbours if their image locations, after accounting for motion, are nearby. More formally, let vx(x, y), vy(x, y) denote the optical flow calculated at time t. Then the pixel (x0, y0, t) is a neighbour of pixel (x1, y1, t+1) if:

|| (x0 + vx(x0, y0), y0 + vy(x0, y0)) - (x1, y1) || < T        (4)

The flow field vx(x0, y0), vy(x0, y0) is calculated using a standard motion estimation algorithm [Lucas and Kanade 1981]. Note that the optical flow is only used to define the neighbourhood of each pixel, not to propagate colours through time.

Now, given a set of locations r_i where the colours are specified by the user, u(r_i) = u_i, v(r_i) = v_i, we minimize J(U), J(V) subject to these constraints. Since the cost functions are quadratic and the constraints are linear, this
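Both weighting functions can be sketched as follows for a single pixel r and its neighbours. Clipping negative correlation affinities before normalising is a simplification of this sketch, not part of the original formulation.

```python
import numpy as np

def gaussian_affinities(y_r, y_nbrs, sigma):
    """w_rs proportional to exp(-(Y(r)-Y(s))^2 / (2*sigma^2)), normalised
    so the weights sum to one (equation 2)."""
    w = np.exp(-(np.asarray(y_nbrs, float) - y_r) ** 2 / (2 * sigma ** 2))
    return w / w.sum()

def correlation_affinities(y_r, y_nbrs, eps=1e-8):
    """w_rs proportional to 1 + (Y(r)-mu_r)(Y(s)-mu_r)/sigma_r^2, using the
    mean and variance of the window around r (equation 3). Negative
    affinities are clipped here for simplicity before normalising."""
    window = np.append(np.asarray(y_nbrs, float), y_r)
    mu, var = window.mean(), window.var() + eps   # eps guards flat windows
    w = 1.0 + (y_r - mu) * (np.asarray(y_nbrs, float) - mu) / var
    w = np.clip(w, 0.0, None)
    return w / w.sum()
```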

optimization problem yields a large, sparse system of linear equations, which may be solved using a number of standard methods.

Our algorithm is closely related to algorithms proposed for other tasks in image processing. In image denoising algorithms based on anisotropic diffusion, one often minimizes a function similar to equation 1, but the function is applied to the image intensity as well. In image segmentation algorithms based on normalized cuts [Shi and Malik 1997], one attempts to find the second smallest eigenvector of the matrix D - W, where W is an n x n matrix whose elements are the pairwise affinities between pixels (i.e., the (r, s) entry of the matrix is w_rs) and D is a diagonal matrix whose diagonal elements are the sums of the affinities (in our case this is always 1). The second smallest eigenvector of any symmetric matrix A is a unit-norm vector x that minimizes x^T A x and is orthogonal to the first eigenvector. By direct inspection, the quadratic form minimized by normalized cuts is exactly our cost function J, that is, x^T (D - W) x = J(x). Thus, our algorithm minimizes the same cost function but under different constraints.
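The method described above solves the sparse linear system directly (for example with a least-squares or multigrid solver). As a simple illustrative substitute, the sketch below runs Gauss-Seidel sweeps on a 1-D signal, repeatedly replacing each unconstrained pixel's chrominance with the affinity-weighted average of its two neighbours while holding scribbled pixels fixed; the function and parameter names are assumptions for this example.

```python
import numpy as np

def propagate_colours(Y, scribbles, sigma=0.05, iters=500):
    """Toy 1-D colourization by Gauss-Seidel iteration on the sparse system.

    `Y` is the 1-D intensity signal; `scribbles` maps pixel index -> fixed
    chrominance value. Each free pixel converges to the affinity-weighted
    average of its neighbours, so colour stops propagating at intensity edges.
    """
    Y = np.asarray(Y, float)
    n = len(Y)
    U = np.zeros(n)
    for r, val in scribbles.items():
        U[r] = val                        # hard constraints from the user
    for _ in range(iters):
        for r in range(n):
            if r in scribbles:
                continue
            nbrs = [s for s in (r - 1, r + 1) if 0 <= s < n]
            # Gaussian affinities (equation 2), normalised to sum to one.
            w = np.exp(-(Y[nbrs] - Y[r]) ** 2 / (2 * sigma ** 2))
            w /= w.sum()
            U[r] = float(w @ U[nbrs])
    return U
```

On a step-edge intensity signal, the two scribbled chrominance values fill their own sides of the edge and do not bleed across it, which is exactly the behaviour the cost function encourages.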

2.5 Results
The results shown here were all obtained using the correlation-based window (equation 3, or equivalently using the local linearity assumption). The mean and variance mu_r, sigma_r^2 for each pixel were calculated by giving more weight to pixels with similar intensities. Visually similar results were also obtained with the Gaussian window (equation 2). The threshold T in equation 4 was set to 1, so that the window used was 3x3x3. For still images we used Matlab's built-in least squares solver for sparse linear systems, and for the movie sequences we used a multigrid solver. Using the multigrid solver, the run time was approximately 15 seconds per frame.

Figure 2 shows some still grayscale images marked by the user's colour scribbles next to the corresponding colourization results. Even though the total number of colour scribbles is quite modest, the resulting colourization is surprisingly convincing. Typically, the artist may want to start with a small number of colour scribbles, and then fine-tune the colourization results by adding more scribbles; Figure 3 demonstrates such a progression on a still image.

We have also successfully colourized several short clips from the television show "I Love Lucy" and from Chaplin's classic movie Modern Times. The original clips were obviously in black and white, so in these examples we did not have a colour reference to pick the colours from. Since automating the choice of colours was not our goal in this work, we used the original colour channels of each image when picking the colours. Figures 5 and 6 show selected frames from colourized movie clips. As can be seen, very convincing results are generated by our algorithm even from a relatively small number of colour scribbles.

Figure 4 shows how our technique can be applied to recolouring. To change the colour of an orange in the top left image to green, the artist first defines a rough mask around it and then scribbles inside the orange using the desired colour. Our technique is then used to propagate the green colour until an intensity boundary is found. In this application the affinity between pixels is based not only on the similarity of their intensities, but also on the similarity of their colours in the original image. Specifically, we minimize the cost (equation 1) under two groups of constraints: first, for pixels covered by the user's scribbles, the final colour should be the colour of the scribble; second, for pixels outside the mask, the colour should be the same as the original colour. All other colours are automatically determined by the optimization process. Note that unlike global colourmap manipulations, our algorithm does not recolour the other orange in the image, since colours are not propagated across intensity boundaries. The bottom row of the figure shows another example.

Figures 7 and 8 compare our method to two alternative methods: either using automatic segmentation, or using tracking to propagate colours across time.

In figure 7 the alternative method is one where the image is first segmented automatically and then the scribbled colours are used to "flood fill" each segment. Segmentation is a very difficult problem, and even state-of-the-art methods may fail to automatically delineate all the correct boundaries, such as the intricate boundary between the hair and the forehead, or the low-contrast boundary between the lips and the face. Figure 7a shows the result of automatic segmentation computed using a version of the normalized cuts algorithm [Shi and Malik 1997]. Consequently, the colourization achieved with this alternative method (figure 7b) is noticeably worse than the one computed by our method (figure 7c). In both cases the same colour scribbles were used, and distinctive colours were deliberately chosen so that flaws in the colourization would be more apparent.

Figure 8 compares our method for colourizing image sequences to an alternative method where a single frame is colourized and then optical flow tracking is used to propagate the colours across time. Since our method uses optical flow only to define the local neighbourhood, it is much more robust to tracking failures.

An advantage of our optimization framework is that we use segmentation cues and optical flow as hints for the correct colourization, but the colourization can be quite good even when these hints are wrong. If the automatic segmentation had been perfect, then flood-filling segments would have produced perfect results; likewise, if dense optical flow had been perfect, then propagating colours from a single frame would have also worked perfectly. Yet despite many years of research in computer vision, state-of-the-art algorithms still do not work perfectly in an automatic fashion, and in both cases the results could be improved using more sophisticated algorithms.

Figure 2: Still image colorization examples. Top row: the input black-and-white images with scribbled colors (a1) (b1) (c1). Bottom row: the resulting color images (a2) (b2) (c2).

Figure 3: Progressively improving a colorization. The artist begins with the scribbles shown in (a1), which yield the result in (a2). Note that the table cloth gets the same pink colour as the girl's dress; also, some colour is bleeding from the cyan pacifier onto the wall behind it. By adding colour scribbles on the table cloth and on the wall (b1), these problems are eliminated (b2). Next, the artist decides to change the colour of the beads by sprinkling a few red pixels (c1), yielding the final result (c2). Note that it was not necessary to mark each and every bead.

Figure 4: A comparison with automatic segmentation. (a) Segmented image: segmenting a fuzzy hair boundary is a difficult task for typical segmentation methods. (b) Result of colouring each segment with a constant colour. (c) Our result. For visualization purposes distinctive colours were used.
