Dr. B.A.T. University, Lonere

Image Cartoonize with Steganography

This application applies a multipass Bilateral Filter to colour images. Bilateral filtering is an edge-preserving smoothing filter. The technique extends the concept of Gaussian smoothing by weighting the filter coefficients with their corresponding relative pixel intensities. Pixels that are very different in intensity from the central pixel are weighted less, even though they may be in close proximity to the central pixel. This is effectively a convolution with a non-linear Gaussian filter, with weights based on pixel intensities. It is applied as two Gaussian filters at a localized pixel neighbourhood: one in the spatial domain, and one in the intensity domain.
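As a rough illustration of the idea, here is a minimal single-channel sketch in plain NumPy (not the application's optimized implementation): each output pixel is a weighted average of its neighbourhood, where the weight is the product of a spatial Gaussian and an intensity Gaussian, so large intensity jumps (edges) are preserved.

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_spatial=2.0, sigma_intensity=0.1):
    """Single-pass bilateral filter for a 2D float image in [0, 1].

    Each output pixel is a weighted average of its (2*radius+1)^2
    neighbourhood; weights combine a spatial Gaussian with an
    intensity Gaussian, so edges (large intensity jumps) survive.
    """
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian over the window offsets.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_spatial**2))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + 2*radius + 1, j:j + 2*radius + 1]
            intensity = np.exp(-(window - img[i, j])**2 / (2 * sigma_intensity**2))
            wgt = spatial * intensity
            out[i, j] = (wgt * window).sum() / wgt.sum()
    return out
```

Applying the filter several times ("multipass") strengthens the flattening of smooth regions while edges stay crisp, which is exactly what gives the cartoon look.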

1.1.1 Changes
V6 – Better Contour Algorithm; Luminance segmentation; Bilateral Filter optimization; Interface look.

1.1.2 Cartoon
To obtain cartoon-like results, these new effects have been implemented:

1.1.3 Contour
Contour, when used, is applied to the Bilateral Filter output in the CieLAB colour space. It has two parameters: the general amount, and how much it is based on Luminance (L) or Hue (AB). It is created with a Sobel filter applied to separate channels: “L” and “A & B” (of the CieLAB colour space).
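A sketch of that idea in NumPy: Sobel gradient magnitudes are computed per channel, and a Lum/Hue factor blends the luminance-based and chroma-based edges. The function names and the exact blending rule are illustrative assumptions, not the application's documented formula.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_magnitude(channel):
    """Gradient magnitude of a 2D channel via 3x3 Sobel kernels."""
    h, w = channel.shape
    pad = np.pad(channel, 1, mode="edge")
    gx = np.zeros_like(channel)
    gy = np.zeros_like(channel)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 3, j:j + 3]
            gx[i, j] = (SOBEL_X * win).sum()
            gy[i, j] = (SOBEL_Y * win).sum()
    return np.hypot(gx, gy)

def contour(L, A, B, amount=1.0, lum_hue=0.5):
    """Blend luminance-based and chroma-based edges (illustrative rule)."""
    edges = lum_hue * sobel_magnitude(L) + (1 - lum_hue) * 0.5 * (
        sobel_magnitude(A) + sobel_magnitude(B))
    return amount * edges
```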

Dept of Information Technology


1.1.4 Luminance Segmentation
This is a nice way to obtain a more cartoon-like output. Like Contour, this works in the CieLAB colour space. (Note that we are talking only about Luminance values.)

Algorithm: a histogram of the Luminance channel (L) of the bilateral-filtered picture is built. This histogram is then divided into N segments, in such a way that each segment contains the same number of intensity values (Fig. —, example: the yellow lines). Each segment is assigned a Luminance value given by the weighted average of the histogram values in that segment. These segments are then merged with the original bilateral output by a value called “Presence”:
- 0% Presence means that the bilateral filter output luminance has no variation applied to it (no segmentation applied).
- 25% Presence means that the Luminance of the output picture is given by 25% of the segmented Luminance and 75% of the bilateral filter output luminance, and so on.
- 100% Presence means that the Luminance of the output picture is fully given by the segmented Luminance.

The best way to get an idea of how this works is to tweak these parameters and watch the output results.
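The equal-count split and the Presence blend can be sketched as follows. This sketch replaces each segment by its mean luminance; the application's exact weighted-average rule is not fully specified, so the per-segment mean is an assumption.

```python
import numpy as np

def segment_luminance(L, n_segments=4, presence=0.6):
    """Posterize luminance into n equal-count segments, then blend.

    Segment boundaries come from quantiles, so each segment holds
    roughly the same number of pixels; each segment is replaced by the
    mean luminance of its pixels, then mixed with the original by
    'presence' (0 = untouched bilateral output, 1 = fully segmented).
    """
    flat = L.ravel()
    # Equal-count boundaries (quantiles of the luminance histogram).
    edges = np.quantile(flat, np.linspace(0, 1, n_segments + 1))
    seg = np.clip(np.searchsorted(edges, flat, side="right") - 1,
                  0, n_segments - 1)
    segmented = flat.copy()
    for k in range(n_segments):
        mask = seg == k
        if mask.any():
            segmented[mask] = flat[mask].mean()
    segmented = segmented.reshape(L.shape)
    return (1 - presence) * L + presence * segmented
```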

1.4 Flow
As we can see, before applying the Bilateral Filter it is possible to apply a pre-effect. The pre-effect can be one of these:
1. Auto Histogram Equalize
2. Exposure Control
3. Brightness, Contrast & Saturation

After the Bilateral Filter there are 4 possible paths to follow:

1.4.1 Pure Bilateral (black) 0-1-6
The output picture is the output of the Bilateral Filter. To follow this path, set Contour Amount = 0 and Segmentation Presence = 0.

1.4.2 Add a Contour (red) 0-1-2-5-6
[1-2] A contour filter is applied to the Bilateral Output. [1-5] Contour is applied to the bilateral output. To follow this path, set Contour Amount > 0 and Segmentation Presence = 0.

1.4.3 Luminance Segmentation (blue) 0-1-3-4-6
[1-3] A “Luminance Segmentation” is applied to the Bilateral Output. (The number of segments is customizable.)

[1-4] The output is the result of merging the “Luminance Segmented” picture with the Bilateral Output. This merge is customizable with the “Presence %” slidebar. To follow this path, set Contour Amount = 0 and Segmentation Presence > 0.

1.4.4 CARTOON-like: Contour and Luminance Segmentation (green) 0-1-2-3-4-5-6
[1-2] A contour filter is applied to the Bilateral Output. [1-3] A “Luminance Segmentation” is applied to the Bilateral Output. (The number of segments is customizable.) [1-4] We merge the “Luminance Segmented” picture with the Bilateral Output; this merge is customizable with the “Presence %” slidebar. [4-5] Contour is applied to the output of the above merging. To follow this path, set Contour Amount > 0 and Segmentation Presence > 0.

1.5 Parameters Details
Parameters are saved in the files “LastSettings.txt” and “LastSettingsEX.txt” for easy reading. Not to lose them, make a copy.

1.5.1 Pre-EFFECT
As seen above, it can be: Auto Equalize; Exposure; BCS (Brightness, Contrast, Saturation).

1.6 BILATERAL FILTER

1.6.1 Colour Space
This is the colour space the bilateral filter will be applied to.
- CieLAB: the bilateral filter is computed only from the Luminance (L) channel of the CieLAB colour space (faster).
- RGB: the bilateral filter is computed from all channels: Red, Green and Blue.

1.6.2 Radius
Radius of the intensity and spatial domains.
R=1 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 9 neighbourhood pixels.
R=2 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 25 neighbourhood pixels.
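The path selection in section 1.4 depends only on the two parameter values; a minimal sketch (path labels follow the flow diagram, the function name is ours):

```python
def cartoon_path(contour_amount, segmentation_presence):
    """Which of the four processing paths the two parameters select."""
    if contour_amount == 0 and segmentation_presence == 0:
        return "pure bilateral (0-1-6)"
    if contour_amount > 0 and segmentation_presence == 0:
        return "bilateral + contour (0-1-2-5-6)"
    if contour_amount == 0 and segmentation_presence > 0:
        return "bilateral + luminance segmentation (0-1-3-4-6)"
    return "cartoon: contour + segmentation (0-1-2-3-4-5-6)"
```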

R=3 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 49 neighbourhood pixels.
R=4 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 81 neighbourhood pixels.
R=5 means that for each pixel the computed pixel will be the result of ((R*2)+1)^2 = 121 neighbourhood pixels.

1.6.3 Intensity Mode
This is the type of decay curve for the intensity space. It can be:
0 Gaussian1
1 Gaussian2
2 InvProportional
3 Linear

1.6.4 Intensity Sigma
This is how fast the intensity space decreases.

1.6.5 Spatial Sigma
This is how fast the spatial space decreases going far from the central pixel.

1.6.6 Iterations
How many times the Bilateral Filter is applied.

1.7 CONTOUR
Amount: contour darkness.
Lum/Hue: how much the contour is based on Luminance or Hue.

1.8 LUMINANCE SEGMENTATION
Segments: number of Luminance segments.

Samples
These are good settings for cartoon-like results:
PreEFFECT:
Brightness: 110
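The four intensity-mode names suggest four decay shapes for the intensity-domain weight. The application does not document the exact formulas, so the curves below are illustrative assumptions only, chosen to match the mode names:

```python
import numpy as np

def intensity_weight(d, mode, sigma=0.1):
    """Plausible decay curves for the intensity-domain weight.

    d is |Y(p) - Y(center)|. The exact formulas used by the
    application are not documented; these are illustrative shapes.
    """
    if mode == 0:   # Gaussian1: standard Gaussian falloff
        return np.exp(-d**2 / (2 * sigma**2))
    if mode == 1:   # Gaussian2: a wider, softer Gaussian
        return np.exp(-d**2 / (8 * sigma**2))
    if mode == 2:   # InvProportional: 1 / (1 + (d/sigma)^2)
        return 1.0 / (1.0 + (d / sigma)**2)
    if mode == 3:   # Linear: ramp down to zero at 3*sigma
        return np.clip(1.0 - d / (3 * sigma), 0.0, 1.0)
    raise ValueError("mode must be 0-3")
```

Whatever the exact curve, all four share the property that matters for edge preservation: weight 1 at zero intensity difference, falling toward 0 as the difference grows.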

Contrast: 0
Saturation: 100
BILATERAL FILTER:
Colour Space: CieLAB
Radius: 5
Intensity Mode: 1 Gaussian2
Intensity Sigma: 0.355
Spatial Sigma: 1000
Iterations: 3
CONTOUR:
Amount: 100
Lum/Hue: 0.5
LUMINANCE SEGMENTATION:
Segments: 4
Presence: 60%

CHAPTER 2
COLOURIZATION USING OPTIMIZATION

2.1 Digital Image Processing

2.1.1 Background
Digital image processing is an area characterized by the need for extensive experimental work to establish the viability of proposed solutions to a given problem. An important characteristic underlying the design of image processing systems is the significant level of testing & experimentation that normally is required before arriving at an acceptable solution. This characteristic implies that the ability to formulate approaches & quickly prototype candidate solutions generally plays a major role in reducing the cost & time required to arrive at a viable system implementation.

2.1.2 What is DIP?
An image may be defined as a two-dimensional function f(x, y), where x & y are spatial coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity or gray level of the image at that point. When x, y & the amplitude values of f are all finite discrete quantities, we call the image a digital image. The field of DIP refers to processing digital images by means of a digital computer. A digital image is composed of a finite number of elements, each of which has a particular location & value. The elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma to radio waves. They can operate also on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops & other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image understanding) is in between image processing & computer vision.

a] [0. a]  [0. y) is the intensity of the image at the point (x.T.Dr. b]I: [0. y) is called the intensity of the image at that point. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. mid-.level process is characterized by the fact that both its inputs & outputs are images. y) where x and y are spatial co-ordinates and the amplitude of „f‟ at any pair of coordinates (x.B. I (xylem) takes non-negative values assume the image is bounded by a rectangle [0. description of that object to reduce them to a form suitable for computer processing & classification of individual objects. one useful paradigm is to consider three types of computerized processes in this continuum: low-.A. y) on the image plane. as in image analysis & at the far end of the continuum performing the cognitive functions normally associated with human vision.University.1 What is an image? An image is represented as a two dimensional function f(x. A low. 1 Gray scale image: A grayscale image is a function I (xylem) of the two spatial coordinates of the image plane. Digital image processing. 2. contrast enhancement & image sharpening.level processing involves “Making sense” of an ensemble of recognized objects. However. I(x. info) Dept of Information Technology 8 . Mid-level process on images involves tasks such as segmentation. Low-level process involves primitive operations such as image processing to reduce noise. Lonere Image Cartoonize with Stegnography There are no clear-cut boundaries in the continuum from image processing at one end to complete vision at the other. b]  [0. & high-level processes. as already defined is used successfully in a broad range of areas of exceptional social & economic value. Finally higher.

2. Color image:
It can be represented by three functions: R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the amplitude values is called quantization.

Figure 2.1: Given a grayscale image marked with some colour scribbles by the user (left), our algorithm produces a colourized image (middle).
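The two digitization steps above can be shown in a few lines. Here a continuous image f(x, y) is stood in for by a function we can evaluate; the grid size and gray-level count are arbitrary choices for the example.

```python
import numpy as np

# A stand-in for a continuous image f(x, y), with values in [0, 1].
f = lambda x, y: 0.5 * (np.sin(2 * np.pi * x) + 1)

# Sampling: evaluate f on a finite grid of spatial coordinates.
n = 8
xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
samples = f(xs, ys)

# Quantization: map each amplitude to one of 256 discrete gray levels.
digital = np.round(samples * 255).astype(np.uint8)
```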

Abstract:
Colourization is a computer-assisted process of adding colour to a monochrome image or movie. The process typically involves segmenting images into regions and tracking these regions across image sequences. Neither of these tasks can be performed reliably in practice; consequently, colourization requires considerable user intervention and remains a tedious, time-consuming, and expensive task. In this paper we present a simple colourization method that requires neither precise image segmentation, nor accurate region tracking. Our method is based on a simple premise: neighboring pixels in space-time that have similar intensities should have similar colours. We formalize this premise using a quadratic cost function and obtain an optimization problem that can be solved efficiently using standard techniques. In our approach an artist only needs to annotate the image with a few colour scribbles, and the indicated colours are automatically propagated in both space and time to produce a fully colourized image or sequence. We demonstrate that high quality colourizations of stills and movie clips may be obtained from a relatively modest amount of user input.

CR Categories: I.4.9 [Image Processing and Computer Vision]
Keywords: colourization, recolouring, segmentation

2.2 Introduction
Colourization is a term introduced by Wilson Markle in 1970 to describe the computer-assisted process he invented for adding colour to black and white movies or TV programs [Burns]. The term is now used generically to describe any technique for adding colour to monochrome stills and footage. Colourization of classic motion pictures has generated much controversy, which partially accounts for the fact that not many of these movies have been colourized to date. However, there are still massive amounts of black and white television shows that could be colourized: the artistic controversy is often irrelevant here, while the financial incentives are substantial, as was succinctly pointed out by Earl Glick in 1984: “You couldn't make Wyatt Earp today for $1 million an episode. But for $50,000 a segment, you can turn it into colour and have a brand new series with no residuals to pay.”

Colourization of still images also appears to be a topic of considerable interest among users of image editing software, as evidenced by multiple colourization tutorials on the World Wide Web. A major difficulty with colourization, however, lies in the fact that it is an expensive and time-consuming process. For example, in order to colourize a still image an

artist typically begins by segmenting the image into regions, and then proceeds to assign a colour to each region. Unfortunately, automatic segmentation algorithms often fail to correctly identify fuzzy or complex region boundaries, such as the boundary between a subject's hair and her face. Thus, the artist is often left with the task of manually delineating complicated boundaries between regions. Colourization of movies requires, in addition, tracking regions across the frames of a shot. Existing tracking algorithms typically fail to robustly track non-rigid regions, again requiring massive user intervention in the process.

In this paper we describe a new interactive colourization technique that requires neither precise manual segmentation, nor accurate tracking. The technique is based on a unified framework applicable to both still images and image sequences. The user indicates how each region should be coloured by scribbling the desired colour in the interior of the region, instead of tracing out its precise boundary. Using these user-supplied constraints our technique automatically propagates colours to the remaining pixels in the image sequence. This colourization process is demonstrated in Figure 1. The underlying algorithm is based on the simple premise that nearby pixels in space-time that have similar gray levels should also have similar colours. This assumption leads to an optimization problem that can be solved efficiently using standard techniques. Our contribution, thus, is a new simple yet surprisingly effective interactive colourization technique that drastically reduces the amount of input required from the user. In addition to colourization of black and white images and movies, our technique is also applicable to selective recolouring, an extremely useful operation in digital photography and in special effects.

2.3 Previous work:
In Markle's original colourization process [Markle and Hunt 1987] a colour mask is manually painted for at least one reference frame in a shot. Motion detection and tracking is then applied, allowing colours to be automatically assigned to other frames in regions where no motion occurs. Colours in the vicinity of moving edges are assigned using optical flow, which often requires manual fixing by the operator. Although not much is publicly known about the techniques used in more contemporary colourization systems used in the industry, there are indications that these systems still rely on defining regions and tracking them between the frames of a shot.

A semi-automatic technique has also been described for colourizing a grayscale image by transferring colour from a reference colour image: the luminance values in the neighbourhood of each pixel in the target image are examined, and the colour is transferred from pixels with matching neighbourhoods in the reference image. This technique works well on images where differently coloured regions give rise to distinct luminance clusters, or possess distinct textures. In other cases, the user must direct the search for matching pixels by specifying swatches indicating corresponding regions in the two images. While this technique has produced some impressive results, note that the artistic control over the outcome is quite indirect: the artist must find reference images containing the desired colours over regions with similar textures to those that she wishes to colourize. It is also difficult to retune the outcome selectively in problematic areas. Moreover, the technique does not explicitly enforce spatial continuity of the colours, and in some images it may assign vastly different colours to neighbouring pixels that have similar intensities. Black Magic, a commercial software for colourizing still images, provides the user with useful brushes and colour palettes, but the segmentation task is left entirely to the user. In contrast, in our technique the artist chooses the colours directly, and is able to retune the results by scribbling more colour where necessary.

2.4 Algorithm:
We work in YUV colour space, commonly used in video, where Y is the monochromatic luminance channel, which we will refer to simply as intensity, while U and V are the chrominance channels, encoding the colour [Jack 2001]. The algorithm is given as input an intensity volume Y(x, y, t) and outputs two colour volumes U(x, y, t) and V(x, y, t). To simplify notation we will use boldface letters (e.g. r, s) to denote (x, y, t) triplets. Thus, Y(r) is the intensity of a particular pixel. As mentioned in the introduction, we wish to impose the constraint that two neighbouring pixels r, s should have similar colours if their intensities are similar. Thus, we wish to minimize the difference between the colour U(r) at pixel r and the weighted average of the colours at neighbouring pixels:

J(U) = sum_r ( U(r) - sum_{s in N(r)} w_rs U(s) )^2    (1)

where w_rs is a weighting function that sums to one, large when Y(r) is similar to Y(s), and small when the two intensities are different. Similar weighting functions are used extensively in image segmentation algorithms, where they are usually referred to as affinity functions.

y1. y). Note that the optical flow is only used to define the neighbourhood of each pixel. Now given a set of locations ri where the colours are specified by the user u(ri) = ui. The notation r belongs N(s) denotes the fact that r and s are neighbouring pixels.Dr.bi variables yields an equation equivalent to equation 1 with a correlation based affinity function. v(ri) = vi we minimize J(U).bi are the same for all pixels in a small neighbourhood around r.University. In a single frame. y0. this Dept of Information Technology 13 .A. While this model adds to the system a pair of variables per each image window. The correlation affinity can also be derived from assuming a local linear relation between colour and intensity. Between two successive frames. and when the intensity is an edge the colour should also be an edge (although the values on the two sides of the edge can be any two numbers). t) is a neighbour of pixel (x1.T. The simplest one is commonly used by image segmentation algorithms and is based on the squared difference between the two intensities: A second weighting function is based on the normalized correlation between the two intensities: where mr and sr are the mean and variance of the intensities in a window around r. not to propagate colours through time. More formally. Since the cost functions are quadratic and the constraints are linear. Formally. after accounting for motion.B. Then the pixel (x0. are nearby. we define two pixels as neighbours if their image locations are nearby. we define two pixels as neighbours if their image locations. Lonere Image Cartoonize with Stegnography have experimented with two weighting functions. t +1) if: The flow field vx(x0). vy(y0) is calculated using a standard motion estimation algorithm [Lucas and Kanade 1981]. a simple elimination of the ai. let vx(x. J(V) subject to these constraints. y) denote the optical flow calculated at time t. 
it assumes that the colour at a pixel U(r) is a linear function of the intensity Y(r): U(r) = aiY(r)+bi and the linear coefficients ai. vy(x. This assumption can be justified empirically and intuitively it means that when the intensity is constant the colour should be constant.

optimization problem yields a large, sparse system of linear equations, which may be solved using a number of standard methods. Our algorithm is closely related to algorithms proposed for other tasks in image processing. In image denoising algorithms based on anisotropic diffusion one often minimizes a function similar to equation 1, but the function is applied to the image intensity as well. In image segmentation algorithms based on normalized cuts [Shi and Malik 1997], one attempts to find the second smallest eigenvector of the matrix D - W, where W is an n x n matrix whose elements are the pairwise affinities between pixels (i.e., the r, s entry of the matrix is w_rs) and D is a diagonal matrix whose diagonal elements are the sum of the affinities (in our case this is always 1). The second smallest eigenvector of any symmetric matrix A is a unit norm vector x that minimizes x^T A x and is orthogonal to the first eigenvector. By direct inspection, the quadratic form minimized by normalized cuts is exactly our cost function J, that is x^T (D - W) x = J(x). Thus, our algorithm minimizes the same cost function but under different constraints.
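The connection between the cost and the D - W matrix is easy to sanity-check numerically. The sketch below verifies that J(x) equals the quadratic form ||(D - W) x||^2 for a random affinity matrix whose rows sum to one (so D is the identity, matching the "in our case this is always 1" remark above).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# Random affinity matrix W with zero diagonal and rows summing to one,
# mimicking the colourization weights w_rs.
W = rng.random((n, n))
np.fill_diagonal(W, 0.0)
W /= W.sum(axis=1, keepdims=True)
D = np.eye(n)          # diagonal of the affinity row sums, all 1 here

x = rng.random(n)
J = np.sum((x - W @ x) ** 2)            # cost function J(x), equation 1
quad = x @ (D - W).T @ (D - W) @ x      # quadratic form ||(D - W) x||^2
assert np.isclose(J, quad)
```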

2.5 RESULTS:
The results shown here were all obtained using the correlation-based window (equation 3, or equivalently using the local linearity assumption). The mean and variance for each pixel were calculated by giving more weight to pixels with similar intensities. The threshold T in equation 4 was set to 1, so that the window used was 3x3x3. Visually similar results were also obtained with the Gaussian window (equation 2). For still images we used Matlab's built-in least squares solver for sparse linear systems, and for the movie sequences we used a multigrid solver. Using the multigrid solver, the run time was approximately 15 seconds per frame.

Figure 2 shows some still gray scale images marked by the user's colour scribbles next to the corresponding colourization results. Even though the total number of colour scribbles is quite modest, very convincing results are generated by our algorithm. All other colours are automatically determined by the optimization process. Since automating the choice of colours was not our goal in this work, we used the original colour channels of each image when picking the colours.

Typically, the artist may want to start with a small number of colour scribbles, and then fine-tune the colourization results by adding more scribbles. Figure 3 demonstrates such a progression on a still image.

Figures 5 and 6 show selected frames from colourized movie clips. We have also successfully colourized several short clips from the television show “I Love Lucy” and from Chaplin's classic movie Modern Times. The original clips were obviously in black and white, so in these examples we did not have a colour reference to pick the colours from.

Figure 4 shows how our technique can be applied to recolouring. To change the colour of an orange in the top left image to green, the artist first defines a rough mask around it and then scribbles inside the orange using the desired colour. Our technique is then used to propagate the green colour until an intensity boundary is found. In this application the affinity between pixels is based not only on similarity of their intensities, but also on the similarity of their colours in the original image. Thus, we minimize the cost (equation 1) under two groups of constraints. First, for pixels covered by the user's scribbles, the final colour should be the colour of the scribble. Second, for pixels outside the mask, the colour should be the same as the original colour. Note that unlike global colourmap manipulations, our algorithm does not recolour the other orange in the image, since colours are not propagated across intensity boundaries. The bottom row of the figure shows another example.

Figures 7 and 8 compare our method to two alternative methods. In figure 7 the alternative method is one where the image is first

segmented automatically and then the scribbled colours are used to “flood fill” each segment. Figure 7a shows the result of automatic segmentation computed using a version of the normalized cuts algorithm [Shi and Malik 1997]. Segmentation is a very difficult problem and even state-of-the-art methods may fail to automatically delineate all the correct boundaries, such as the intricate boundary between the hair and the forehead, or the low contrast boundary between the lips and the face. Consequently, the colourization achieved with this alternative method (figure 7b) is noticeably worse than the one computed by our method (figure 7c). In both cases, the same colour scribbles were used. Distinctive colours were deliberately chosen so that flaws in the colourization would be more apparent.

Figure 8 compares our method for colourizing image sequences to an alternative method where a single frame is colourized and then optical flow tracking is used to propagate the colours across time. Since our method uses optical flow only to define the local neighbourhood, it is much more robust to tracking failures.

An advantage of our optimization framework is that we use segmentation cues and optical flow as hints for the correct colourization, but the colourization can be quite good even when these hints are wrong. In other words, if the automatic segmentation had been perfect then flood filling segments would have produced perfect results. Likewise, if dense optical flow had been perfect then propagating colours from a single frame would have also worked perfectly. Yet despite many years of research in computer vision, state-of-the-art algorithms still do not work perfectly in an automatic fashion. In both cases, either using automatic segmentation or using tracking to propagate colours across time, the results could be improved using more sophisticated algorithms.
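The core optimization described in section 2.4 can be sketched on a toy image. This is a deliberately simplified dense version: free pixels are made equal to the affinity-weighted average of their neighbours (the stationarity condition behind equation 1), scribbled pixels are held fixed, and the resulting small linear system is solved directly. The real system is large and sparse and is solved with sparse or multigrid solvers; the function name and the 4-neighbourhood are our illustrative choices.

```python
import numpy as np

def colourize_channel(Y, scribbles, sigma=0.1):
    """Toy dense sketch of scribble propagation (not the paper's exact solver).

    Free pixels satisfy U(r) = sum_{s in N(r)} w_rs U(s) with Gaussian
    intensity affinities (equation 2 style); scribbled pixels are fixed.
    Y: small 2D intensity image; scribbles: {(i, j): channel value}.
    """
    h, w = Y.shape
    n = h * w
    idx = lambda i, j: i * w + j
    A = np.eye(n)
    b = np.zeros(n)
    for i in range(h):
        for j in range(w):
            r = idx(i, j)
            if (i, j) in scribbles:        # constrained: U(r) = scribble value
                b[r] = scribbles[(i, j)]
                continue
            nbrs = [(i + di, j + dj)
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                    if 0 <= i + di < h and 0 <= j + dj < w]
            wts = np.array([np.exp(-(Y[i, j] - Y[p]) ** 2 / (2 * sigma ** 2))
                            for p in nbrs])
            wts /= wts.sum()               # affinities sum to one
            for p, wt in zip(nbrs, wts):
                A[r, idx(*p)] = -wt        # row encodes U(r) - sum w_rs U(s) = 0
    return np.linalg.solve(A, b).reshape(h, w)

# Two flat regions separated by an intensity edge; one scribble per region.
Y = np.zeros((4, 4))
Y[:, 2:] = 1.0
U = colourize_channel(Y, {(0, 0): 0.2, (0, 3): 0.8})
```

Because the affinity across the intensity edge is tiny, each scribbled value floods its own region and stops at the boundary, which is the behaviour the results section describes.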

Dr. Bottom row: resulting color image (a1) (b1) (c1) (a2) (b2) (c2) Dept of Information Technology 17 . Lonere Image Cartoonize with Stegnography Figure 2: Still image colorization examples. Top row: the input black-white image with scribbled colors.A.B.T.University.

Figure 3: Progressively improving a colorization. The artist begins with the scribbles shown in (a1), which yield the result in (a2). Note that the table cloth gets the same pink colour as the girl's dress. Also, some colour is bleeding from the cyan pacifier onto the wall behind it. By adding colour scribbles on the table cloth and on the wall (b1) these problems are eliminated (b2). Next, the artist decides to change the colour of the beads by sprinkling a few red pixels (c1), yielding the final result (c2). Note that it was not necessary to mark each and every bead.

Figure 4: A comparison with automatic segmentation. (a) Segmented image. Segmenting a fuzzy hair boundary is a difficult task for typical segmentation methods. (b) Result of colouring each segment with a constant color. For visualization purposes distinctive colors were used. (c) Our result.