

Color Stippling


Stippling applying the Hard Disk Sampling method with a CMYK color basis. Original image from

At the end of writing my post on Stippling and Blue Noise, I was left wondering how this would apply if we wanted to use colors. Just to recapitulate, the idea of stippling is to generate a reproduction of an image by halftoning, making use of fixed-size points distributed on a plane. The points need to be distributed in a way that is both aesthetically pleasing (for which we place the points carefully so that they show Blue Noise properties) and spaced according to the local density of the image. If we use black dots over a white background, this simply means we need to add more dots in the darker areas of the image.

This is all better summarized in this awesome video:

It also illustrates very well why we would want to use a computer program to carry out this task 😉

More than one sample class

So the principle looks simple enough for black and white. Now, if we want to add color, the first immediate question is: can we not just do the same for each individual R, G, B or C, M, Y, K channel? As it turns out, that would more or less work, but it would look worse than the black and white image. The reason is that combining individual Blue Noise distributions does not lead to a result with the same properties. This is easy to reason about by looking at the first row of Figure 1 below: if we place samples carefully by enforcing a minimum distance between each one of them, but do so on a per-class (color) basis, nothing prevents points belonging to different classes from ending up close to each other. Similarly, as shown in the second row, merging all samples into a single uniform distribution, and then separating them again into individual classes, will cause noticeable gaps in the individual classes’ distributions.


Figure 1. Top row: Uniform individual classes does not lead to uniform combined result.
Bottom row: Uniform result does not lead to uniform individual classes.
Image by [Wei 10].

A solution to this is to combine all color channels into a common distribution, but preserve class information while sampling. Li-Yi Wei wrote about this, naming the method Multi-Class Blue Noise [Wei 10]. In his paper, Li-Yi discusses two methods of achieving color stippling (or, in general, the distribution of N classes of elements in the same space). The simpler approach, called Hard-Disk Sampling, is derived from the original Poisson Sampling or Dart-Throwing method [Cook 86], and works by extending the acceptance test to check neighbour points of all classes. This is done by building a symmetric distance matrix for each pair of classes, where the elements on the diagonal correspond to the distance of each class to itself. Note that setting every other element in the conflict check matrix to zero degenerates into regular dart-throwing (first row of Figure 1). On the other extreme, treating samples as exclusive disks for all classes leads to the bottom row of Figure 1, where the individual classes are not well distributed. The matrix generation method (refer to the paper for details) falls in between these two extremes: it gives classes a priority according to their relevance in each region, and decreases the conflict radius for the less-important classes (allowing for a higher density of these) so that the expected density of the combined result is proportional to the input reference.

Therefore the Hard-Disk Sampling method boils down to an easy extension of the original dart-throwing approach, where we check this carefully-built conflict matrix to decide whether or not to accept each randomly-generated color sample, producing results that sit in between the two extreme cases depicted in Figure 1. This sounds easy enough to give it a shot with a quick prototype!
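To make the acceptance test concrete, here is a minimal Python sketch of the idea (the actual prototype is written in C#, and uses per-pixel radii derived from the image; here the conflict matrix is spatially constant for brevity):

```python
import math
import random

def multiclass_hard_disk(radius, n_attempts=5000, rng=random):
    """Dart throwing with a per-class-pair conflict matrix: a candidate
    of class c is accepted only if it keeps at least radius[c][k]
    distance to every already accepted sample of class k."""
    n_classes = len(radius)
    samples = []  # accepted samples as (x, y, class_index) in the unit square
    for _ in range(n_attempts):
        x, y = rng.random(), rng.random()
        c = rng.randrange(n_classes)
        if all(math.hypot(x - sx, y - sy) >= radius[c][k]
               for sx, sy, k in samples):
            samples.append((x, y, c))
    return samples

# Symmetric conflict matrix for two classes: the diagonal holds each
# class' distance to itself. Zeroing the off-diagonal entries degenerates
# into independent per-class dart throwing (top row of Figure 1), while
# raising them to the diagonal value treats every disk as exclusive for
# all classes (bottom row).
r = [[0.05, 0.02],
     [0.02, 0.05]]
samples = multiclass_hard_disk(r)
```

The only change with respect to plain dart throwing is the matrix lookup inside the acceptance test, which is what makes the extension so appealing.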

Hard-Disk Sampling implementation

Figure 2. Stippling result using a CMYK color basis with subtractive blending. The source image is “Portrait of Jack Nicholson” by Martin Schoeller.


The first detail to bear in mind is that the minimum distance values for each class vary in space: in our stippling case, each individual pixel of the original image has potentially a different color distribution which translates into different importance for each component of the color basis we choose. The distance matrix for a 2D image will thus be 2N dimensional, where N is the number of point classes we consider.

How to choose the point classes depends on the look we’re after: we can, for instance, use Red, Green and Blue and combine them with additive blending over a black background to emulate a screen. Or we could use Cyan, Magenta, Yellow and Black over a white background to emulate print. Any combination will do, but obviously the more classes and the closer their respective colors are to the original image, the better the match. I personally find the results of restrictive color bases like the two aforementioned to be more interesting, as they let you better appreciate how the result emerges from the combination of all the individual samples rather than from a perfect match of each individual sample.

The UI in the prototype provided will let you pick a color basis, which has a blending mode associated: (+) for additive, and (-) for subtractive. Then the density for each individual color class is extracted from each pixel on the source image by projecting the original RGB values to the elements of the chosen basis. I’ve tried different approaches to generate the per-channel densities, and what seems to be working fairly well is to take a projection of each source color into the color basis by doing the dot-product of their resulting vectors in RGB space.

\mathbf{S} = [S^R, S^G, S^B] = source pixel

\mathbf{C} = \{ [C_0^R, C_0^G, C_0^B], [C_1^R, C_1^G, C_1^B], \dots, [C_{N-1}^R, C_{N-1}^G, C_{N-1}^B] \} = desired N-class color basis

\mathbf{D_S} = [ D_0, D_1, \dots, D_{N-1} ] = densities for \mathbf{S}, where:

P = \mathbf{S} \cdot \mathbf{C_i} is the projection of the input sample onto the i-th color basis vector,
D_i = \frac{1}{P} for an additive basis, and
D_i = \frac{1}{1-P} for a subtractive basis

Note this is a non-linear response to intensity, which I found to work a bit better than the linear one.
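Transcribed literally into Python (a sketch, not the C# prototype; the eps clamp is my own addition to avoid dividing by zero when a source color is orthogonal to, or fully saturates, a basis color):

```python
def class_responses(source, basis, additive=True, eps=1e-3):
    """Literal transcription of the formulas above: project the source
    pixel onto each basis color with a dot product in RGB space, then
    apply the reciprocal response."""
    responses = []
    for color in basis:
        p = sum(s * c for s, c in zip(source, color))  # P = S . C_i
        p = min(max(p, 0.0), 1.0 - eps)  # clamp: the raw dot product can exceed 1
        responses.append(1.0 / max(p, eps) if additive
                         else 1.0 / (1.0 - p))
    return responses

# A pure red pixel against an additive RGB basis: the red class gets by
# far the smallest value. Note the reciprocal: a strong projection yields
# a small number, so these values behave like per-class conflict
# distances (small value = that class may be packed densely).
d = class_responses((1.0, 0.0, 0.0),
                    [(1, 0, 0), (0, 1, 0), (0, 0, 1)])
```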


Going back to the sampling issue shown in Figure 1, I was wondering: for our particular case of stippling, where we are interested in the combined result and not so much in the individual color channels, could we not get away with the simpler method of a uniform combined result, as shown in the second row, despite the less uniform distribution of the individual color pigments?

To test that out, it was easy to simplify the algorithm to calculate the distance matrix based solely on the brightness of the input pixels, disregarding class information. Then, once a random sample has been accepted based exclusively on the spatial density, its class is chosen randomly from the available color basis according to a probability distribution matching the underlying pixel color.
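The class assignment of this simplified variant can be sketched as follows (a Python illustration; the per-class weights would come from the density projection described above, but here they are just passed in):

```python
import random

def assign_class(weights, rng=random):
    """Once a candidate has been accepted on spatial grounds alone,
    draw its class with probability proportional to the per-class
    weights of the underlying pixel."""
    u = rng.random() * sum(weights)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if u < acc:
            return i
    return len(weights) - 1  # guard against floating-point round-off

# A mostly-cyan pixel with some magenta: class 0 should be chosen
# roughly three times as often as class 1.
counts = [0, 0]
for _ in range(10000):
    counts[assign_class([0.75, 0.25])] += 1
```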


Figure 3. Comparison between single class uniform distribution (left) taking only distances between samples into account, and hard disk sampling (right) producing a uniform distribution both per class and on the combined result. Note how the sharp edges around the eyes are defined similarly, but the results differ in the smooth areas.

With this test I was expecting to still get the good spatial distribution of single-class dart throwing, and an on-average correct color: each individual sample would be off, but on average the sum of the samples should more or less match the tone of the underlying region. Figure 3 shows the comparison between the two methods, which differ much more than I was expecting. The truth is that in high-frequency regions the adaptive sampling predominates over the more even distribution of the former method; the difference is more evident in the smoother gradients of the image, such as the clothing, where you can see that my naive simplification tends to produce noisier results. Having a uniform distribution for the combined result helps achieve good gradients of density, and having a good per-class distribution produces more uniform, smoother tones, with fewer color clumps.


An interesting aspect of the Hard Disk Sampling method as proposed in Wei’s paper is that, at its core, we still rely on random sampling to place points in the image. This is very fast at the beginning, but once the image starts to fill up it becomes increasingly unlikely for a sample to be accepted. More so, since for a given position in the image we’ve prioritized the classes (the densities of each class), it may be extremely hard to place samples of faint tones in the image by random chance alone. Recall from the original stippling post that this is the reason why dart-throwing is so easy to implement, but also so slow for most practical uses.


Figure 4: A blue sample couldn’t find a suitable place in the image by random chance, so it is ‘forced’ by removing much more likely green samples which will be reintroduced eventually.

There are ways to be smarter about how we place samples (other than relying on random chance), but the alternative method proposed in the paper is to discard existing samples after we’ve given up trying to place a new one. This is simple and ingenious: we can think of it as having samples that settle down in the image, and from time to time we need to “shake” portions of the image to dust them off and allow room for improbable samples to settle. More rigorously though :) what we do is rely on the fact that the “probable” samples we remove to make room have a better chance of settling on the final image again, and the result will converge faster than if we expect the unlikely samples to find a suitable spot by randomness alone. This also leads to a better color rendition with less noise, as you can see in Figure 3.
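The eviction step can be sketched like this (a Python illustration with a single conflict radius for brevity; the actual method uses the per-class matrix):

```python
import math

def force_sample(samples, candidate, min_dist):
    """Removal strategy sketch: evict the samples the stubborn candidate
    conflicts with, then accept it. The evicted, more probable samples
    have a good chance of settling again in a later pass."""
    cx, cy = candidate
    kept = [s for s in samples
            if math.hypot(cx - s[0], cy - s[1]) >= min_dist]
    evicted = len(samples) - len(kept)
    kept.append(candidate)
    return kept, evicted

# Two existing samples; forcing a candidate right next to the first one
# evicts it, while the distant sample survives.
new_samples, n = force_sample([(0.50, 0.50), (0.90, 0.90)],
                              (0.52, 0.50), min_dist=0.1)
```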


I wrote a small prototype application which does color stippling of an input image using the Hard Disk Sampling method described in this post. It will generate both images and XML data files which should be easy to parse and render with other applications.

The code is written in C# and is available as usual on GitHub at:
And you can also download the precompiled binaries directly from here.



[Wei 10] Li-Yi Wei. 2010. Multi-Class Blue Noise Sampling. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2010).

[Cook 86]  Cook, R. L. 1986. Stochastic sampling in computer graphics. ACM Transactions on Graphics 5, 1, 51–72.


Posted in programming.


Transmittance Function Maps

Wrath of The Titans

Scenes from Wrath of the Titans involve extremely dense volumetric objects. We adapt the Transmittance Function Mapping algorithm for high quality interactive previsualization and tuning of those media. (c) 2012 Warner Bros. Entertainment

It is not very often that I get a chance to talk about what I do in my day job. I am privileged to work with very smart people on great projects, but it is a fact that these projects are normally protected with teeth and claws, and that includes their technology. In fact, I would normally keep the topics on this sketchbook deliberately away from possible overlaps, both to avoid breaking any confidentiality arrangement and because there are simply lots of other interesting topics I still want to look into and share!

I was fortunate to collaborate on a project with some colleagues at MPC, which ended up being published at DigiPro 2012, and is therefore public now. The text has a bit more technical tone than I would normally like to have in these posts, but it has some pretty pictures which I think are worth skimming through.

The topic is the rendering of volumetric effects in real time, including lighting and shadowing. We wanted to give our artists a tool to work with their fluid simulations without always requiring slow offline renders, thus reducing their iteration times.

The technical paper can be found here: Bringing Transmittance Function Maps to the Screen: Wrath of the Titans and Prometheus

Posted in misc.


Source code migration to GitHub

Happy New Year!

Just a quick post to write about the ongoing migration to GitHub and source code revamping I’m currently carrying out.

Since the beginning of this sketchbook it’s been my intention to provide the source code of the projects, with the hope that someone finds it valuable as a complement to the articles. So far this had been done in a less-than-ideal way, by attaching zip/rar files with the contents of a Visual Studio solution. Some other times this hasn’t even been possible, since the code depended on my internal libraries and was therefore hard to isolate. My intention is therefore to slowly migrate the existing code from SVN to git, reorganize the library dependencies as git submodules so this no longer limits distributing code, and upload the suitable projects to GitHub.

Another change will be moving to cross-platform development. So far I’ve been developing primarily under Windows/VS, but this has become a limitation the times I’ve been asked about, for example, a Mac version of a given plugin. Instead of having to compile the code for each version of Windows/Mac/Linux available out there, I’m removing the dependency on the IDE so that interested people are able to build the code themselves (this will hopefully help find bugs as well). For this I’m using CMake, a cross-platform build system which, from a description file, is able to generate Visual Studio solutions under Windows, GCC makefiles under Linux, and so forth. One exception to this will be the C# projects, for which I’m sticking to Windows for now.

So the GitHub repository can be found at – I’ll be fleshing it out in the next weeks, please let me know of any issue you may find.

Please bear with me as these are quite big changes and I’m also learning as I go :) hopefully it’ll be much better in the long run.


Posted in programming.


Stippling and Blue Noise

It’s been a while since the last post, and truth is I’ve spent most of my spare time lately devoted to my photography. Back however to more technical matters, one of the topics I find fascinating is NPR techniques, and particularly Stippling as an interesting example of them. I’ve been reading about it for some time now, and learnt a couple of interesting things along the way, so I thought I would share them in a post.

When stippling an image, we’re basically interested in generating a distribution of points with a density matching the tones of the original image. The size of the points is generally constant, and we add more or fewer of them to create darker or lighter tones. In these areas it is important to generate a distribution free of sampling artifacts, which show up as patterns in the image (the eye is really good at picking up these patterns).

So essentially all we need is a big number of points distributed randomly, and a way of quantizing their required density when we place them over the image. To generate the points, the first approach would be using purely random points. It turns out, however, that pure randomness does not lead to aesthetically pleasing results, as points tend to cluster in some areas and leave empty spaces in others. On the other extreme, using a perfectly homogeneous scheme (say, a grid) would lead to artifacts in the form of aliasing. We thus need something still random, but somewhat more even.

Random pattern on the left, Blue Noise on the right. Image from [Kopf 06]

In the in-between land there’s a flavor of noise called Blue Noise, which is the name given to a particular distribution of frequencies with “minimal low frequency components and no concentrated spikes in energy”. This means that we have no structural aliasing given by the low frequencies, and even spacing with minimal cluttering. It is generally accepted that blue noise properties make for some of the best sampling patterns for a wide range of applications (see [Lopez 10]) .

Noise patterns: the top image shows structural repetition. The middle one displays radial patterns and clustering. The third one displays blue noise characteristics. Images from [Kopf 06]

One way of generating point distributions with blue noise properties is by means of a Poisson-Disk Distribution [Cook 86]. One of the most straightforward ways of generating such a distribution is by using the so-called Dart-Throwing algorithm [Cook 86], which generates candidate points in a random fashion over the plane, but will only accept each candidate if it lies at a minimum distance from every previously accepted sample. McCool and Fiume [McCool 92] describe a more practical variant of dart throwing, where the dart radius is gradually decreased as more samples are placed. The Dart-Throwing algorithm is inherently sequential and slow due to its quadratic complexity, but the quality of the results is nevertheless high.
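The variant with a decreasing radius fits in a few lines of Python (a sketch in the unit square; the decay schedule and patience threshold are my own choices, not taken from the paper):

```python
import math
import random

def hierarchical_dart_throwing(n_points, r0=0.1, shrink=0.9,
                               patience=300, rng=random):
    """Dart throwing with the McCool-Fiume twist: after `patience`
    consecutive rejections the radius is relaxed, so early samples stay
    well spaced and the target count is always reachable."""
    samples, r, rejects = [], r0, 0
    while len(samples) < n_points:
        x, y = rng.random(), rng.random()
        if all(math.hypot(x - sx, y - sy) >= r for sx, sy in samples):
            samples.append((x, y))
            rejects = 0
        else:
            rejects += 1
            if rejects >= patience:  # give up at this radius
                r *= shrink
                rejects = 0
    return samples

points = hierarchical_dart_throwing(100)
```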

An important property of any distribution generated with the aforementioned variation proposed by McCool and Fiume is its progressiveness: similarly to some low-discrepancy sequences, if we put all the N generated points into a list and draw only its first M elements, with M < N, the resulting point distribution still covers the entire plane (as opposed to sweeping it), simply with a lower density. This is important because it means that if we can stipple a spatial region with a given density by plotting all the points of a tile T, we will also be able to generate any lighter density with the same T by plotting a smaller subset of its points.

The number of points we typically need to stipple an image, in the range of thousands in the best of cases, makes the straightforward implementation of the dart-throwing algorithm impractical. A big part of the problem we’re facing with stippling is how to efficiently generate lots of points in a blue noise distribution. There is a significant amount of research on the matter, and the rest of the post will deal with a couple of approaches I liked and implemented myself, along with notes on related work.

Efficient generation of Poisson-Disk Point Distributions

To generate PD point distributions with blue noise properties there are several families of approaches, among which we have:

  • Incremental methods, which place samples sequentially. The Dart-Throwing algorithm described above is a basic example of them, but one could easily come up with more sophisticated implementations which make use of auxiliary structures such as hashing grids or trees to reduce the cost of the nearest-neighbor lookups. A proposal I found particularly interesting is described in [Lopez 10]; further details on the implementation of this method are provided below.
  • High-throughput methods, which usually rely on a heavy precomputation stage but have a very fast execution time. A prime example of this kind of algorithm is [Kopf 06], also implemented and described in this post. Impressive results are also obtained by [Wei 08] with a GPU-based Poisson-Disk distribution generation algorithm, orders of magnitude faster than the sequential approach.
  • Approaches based on Constrained Voronoi Diagrams (CVD), where each sample corresponds to a cell. These approaches, despite being extremely expensive in time, generate probably the most pleasing distributions. [Hongwei 09] report gains of 10x in execution time in their paper.

Constrained Voronoi Tessellation to generate samples. Image from the paper [Hongwei 09]

The differences between these families of approaches are both quality (depicted as even, perceptually pleasing distributions) and execution time. The best results seem to be obtained with CVDs or sequential algorithms, but their execution times are orders of magnitude slower than high-throughput approaches such as the one based on Wang Tiles, which runs in real time. The latter generates a noisier result, however, which as we’ll see may not be as aesthetically pleasing for the particular case of stippling. Nevertheless, blue noise sampling may be used for a variety of applications such as object or texture placement, as the authors show in [Cohen 03], where the distribution will likely be much less dense and speed is crucial.

Adaptive Incremental Sampling

The first one of the approaches I tried is the one described in the paper [Lopez 10], which is a more efficient extension of a sequential algorithm with good quality results and decent execution time.

The key idea behind the approach is simple: having a sample point on the plane, and a minimum acceptance distance limiting how close any other sample can get to it, we build a disk and generate all subsequent samples on its boundary*, instead of randomly on the plane. For each accepted new sample, we calculate the intersection between its disk and the currently active one, and subtract a portion of the active boundary (so no new samples will overlap). When there’s no remaining boundary on the active disk, we de-queue it and proceed with the next one.

This defines a pattern in how the samples are generated, as new samples are only generated on the currently active disk, and their corresponding disks are queued for later processing. The samples then grow across the image from an initial seed (which is quite pretty to watch!). To speed up disk overlap checks, a secondary image buffer is kept, where disks are rasterized. In order to accept a sample, we first check whether the corresponding rasterized disk would overlap a non-empty pixel in the disk buffer.

The acceptance distance for any sample is given by the density distribution, i.e., the amount of black in the region covered by the sample on the source image.

* More precisely, an angle is sampled on the remaining boundary of the active disk. The new disk will be placed on the line that parts from the center along the direction defined by this angle, at a distance given by the image densities of both the active and the new disk.
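The boundary sampling step can be sketched in Python as follows. Note the hedge in the comments: placing the new disk at exactly the sum of both radii is my simplification; the paper derives the distance from the image densities of both disks.

```python
import math
import random

def boundary_candidate(center, r_active, r_new, free_arcs, rng=random):
    """AIS-style candidate generation sketch: pick an angle uniformly
    over the free arcs remaining on the active disk boundary, and place
    the new disk center along that direction. The placement distance
    (r_active + r_new here) is an assumption for brevity."""
    lengths = [end - start for start, end in free_arcs]
    t = rng.random() * sum(lengths)
    theta = free_arcs[-1][1]  # fallback for round-off at the very end
    for (start, end), length in zip(free_arcs, lengths):
        if t < length:
            theta = start + t
            break
        t -= length
    d = r_active + r_new
    return (center[0] + d * math.cos(theta),
            center[1] + d * math.sin(theta))

# With the full boundary still free, the candidate lands exactly at
# distance r_active + r_new from the active disk center.
cand = boundary_candidate((0.0, 0.0), 0.10, 0.05, [(0.0, 2 * math.pi)])
```

In the full algorithm, `free_arcs` would shrink as accepted disks subtract portions of the active boundary, until the disk is exhausted and de-queued.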

Stippled image generated with the AIS algorithm.

Implementation notes

As part of the test application provided at the end of this post, I worked on an implementation of the AIS algorithm in C#. Although the running times I’m getting are not nearly as fast as the ones reported in the paper, they’re not too bad for the quality. Although this is conveniently omitted in the paper, care has to be taken when implementing the disk adjustment heuristic to deal with sub-pixel disks, especially as they tend to make the process diverge, and may even get the algorithm stuck due to the inability of the disk image buffer to handle disks smaller than 1 pixel. I found that using a correct disk area calculation (as opposed to a squared window approximation) produces fewer artifacts, albeit being slightly slower. Dark images such as the zebra displayed in the screenshot below are remarkably difficult to handle robustly due to the high contrast and sample densities required.

This being said, the results look pretty nice in most cases. There’s a free parameter which controls the overall density of the stippled image; not every image is suited to a fixed value of this parameter, and tinkering with it helps achieve optimal results.

Recursive Wang-Tiles

The second approach I tried belongs to the high-throughput family described above. This algorithm relies on a heavy precomputation stage to generate an auxiliary structure called Wang Tiles, used in combination with PD distributions to achieve fast running times.

Wang Tiles [Cohen 03] are a construction which enables arranging a finite set of elements (the tiles) in a non-periodic (pseudorandom) sequence to cover the plane. The tiles are squares which have a color assigned to each of their four edges. A tile can only be placed if the color of its edges matches those of the surrounding tiles that have been previously placed. From the subset of tiles fulfilling the required restrictions at any given position, an element is chosen randomly.

Set of Wang Tiles arranged in a valid configuration. Image from [Cohen 03]

We start with a tile and sequentially insert random tiles from left to right, simply ensuring that the left edge color on each newly placed tile matches the right edge color of the last one. Once we’ve reached the right bound of the area to cover, we continue building a new row, now matching the left and top edges with the tiles from the previous row. This is fittingly called the scanline algorithm.

We start with a fixed set of tiles, and place them satisfying 1 or 2 restrictions depending on how many neighbor edges we need to match at any given time. For C possible edge colors, we’ll need at least N·C², N >= 1, tiles available to satisfy every restriction on the top and left edges. N determines the number of available choices for any placement, from which we choose using a uniformly distributed random number.
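The scanline placement can be sketched in Python. The tiles here are bare dictionaries of edge colors, a stand-in for the real thing (in the actual algorithm each tile also carries a PD point distribution):

```python
import random

def build_tiles(n_colors, n_choices=2, rng=random):
    """A stand-in tile set with N*C^2 entries: n_choices candidates for
    every (top, left) color pair, with random right/bottom edge colors."""
    return {(top, left): [{"top": top, "left": left,
                           "right": rng.randrange(n_colors),
                           "bottom": rng.randrange(n_colors)}
                          for _ in range(n_choices)]
            for top in range(n_colors) for left in range(n_colors)}

def scanline_fill(tiles, rows, cols, n_colors, rng=random):
    """Scanline placement: every tile matches the bottom color of its
    top neighbour and the right color of its left neighbour, choosing
    uniformly among the valid candidates."""
    grid = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            top = grid[i - 1][j]["bottom"] if i > 0 else rng.randrange(n_colors)
            left = grid[i][j - 1]["right"] if j > 0 else rng.randrange(n_colors)
            grid[i][j] = rng.choice(tiles[(top, left)])
    return grid

grid = scanline_fill(build_tiles(3), rows=4, cols=6, n_colors=3)
```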

What it’s interesting from the sequence generated by the aforementioned algorithm is its aperiodicity, its lack of structure or repetitive patterns.

Aperiodic sequence generated from a finite set of tiles. Image from [Cohen 03]

Now, what do we put in the tiles? Wang Tiles have been used to synthesize textures from samples; we’ll use them to cover the plane with a noise pattern generated from a finite, relatively small set of PD distributions. This process is described in [Kopf 06]. Having this, all we need to do at runtime is execute the scanline algorithm to cover the desired image area with tiles from the precomputed set.

Texture synthesis from samples. Image from [Cohen 03]

Noise synthesis from samples. Image from [Cohen 03]

Before that, there are a couple more details to worry about: Edge seams and recursiveness.

Edge Seams

By simply choosing random square samples of textures or noise patterns and then placing them next to each other, we won’t guarantee that the seams match. We need to build the tiles in such a way that, whenever the colors on opposite edges match, so do the images on the tiles.

To do this, the authors propose choosing square samples for the colors themselves, and then combining those sample colors to build each tile (4 source images for each tile) as depicted in the figure below:

Tile merging: a source distribution is generated for each tile, and then merged with 4 additional distributions corresponding to each edge color. This will ensure the contents of adjacent tiles match on the boundaries. Image from [Kopf 06]

This way, since each matching edge comes from the same image, they will align properly.

For the case of blue noise samples, however, things get a bit more involved: we not only need the noise patterns to match on the edges, but we also need the resulting merged tiles to preserve their PD properties: we don’t want merged samples to form clusters. The authors of [Kopf 06] propose making use of a Voronoi diagram containing all the samples to be merged, and then building a seam from the path that minimizes a cost function which penalizes nearby samples. Put in plain words, this means merging the source PD distributions by cutting along an irregular seam passing through those areas where points are farther apart on average.

Minimum cost seam generated from traversing the Voronoi boundaries. Edge cost is shown in red.

After this stage we’ll have a set of tiles that we can blindly arrange according to the scanline algorithm to generate a sequence which will also preserve PD distribution properties. Note that generating the Wang Tiles is the bulk of the work in this algorithm, but it’s a one-off process. After that, we’ll be able to cover as much area as we want with blue noise without having to run a sequential dart-throwing algorithm at runtime.


The second issue we have to worry about is this: we know our source tiles are progressive, and therefore plotting a subset of their points produces a less dense, but equally space-filling noise distribution. We use this property to generate light shades on our stippled image; but if we happen to require more density than the maximum our tiles’ distribution can provide, we won’t have enough points!

We thus need to make the tiles recursive: allowing any tile to be subdivided into a set of NxN smaller tiles (built following the same edge-matching rules), which provides N² times as many points as the source tile.

Tile subdivision. Image from [Kopf 06]

Recursiveness, along with progressiveness, are the properties that allow us to achieve arbitrarily dark or light shades in the stippled image.

Putting it all together

The stippling algorithm based on Recursive Wang Tiles consists of the following steps:


  1. Let C be the number of edge colors; generate a complete set of N·C² source PD point distributions.
  2. Generate C more PD distributions, one for each edge color.
  3. Build the complete set of N·C² unique Wang Tiles by merging each source distribution with the 4 edge distributions. Note that the original source distributions won’t match on the seams, but these merged ones will.

At runtime: cover the source image with a non-periodic sequence of the pre-generated tiles; for each tile, decide whether to insert each point based on the required image density. If we’ve inserted all the points of a tile’s distribution and we still require more density, subdivide the tile.
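The runtime decision per tile can be sketched as follows (a Python illustration; `tile_points` stands for a tile's progressively ordered point list, and the recursion into the NxN children is left to the caller):

```python
def points_for_tile(tile_points, needed):
    """Per-tile runtime decision: thanks to progressiveness, a prefix of
    the tile's ordered point list gives any lighter shade; if even the
    whole list is not dense enough, signal the caller to subdivide the
    tile and recurse on its NxN children."""
    if needed <= len(tile_points):
        return tile_points[:needed], False
    return list(tile_points), True  # True: subdivision required

pts, subdivide = points_for_tile([(0.1, 0.2), (0.6, 0.7), (0.3, 0.9)], 2)
```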


Stippled image generated using the Recursive Wang Tiles method. Click for full size.

Note that the image generated with this method is somewhat noisier than the result achieved with AIS; however, if you run the provided software you’ll notice how much faster this method is (excluding the preprocessing, that is).


To finish the post, I wanted to provide a hint of what I consider a failed test, either as a note to self, or in case anybody reading this is willing to come up with an idea to make it work in the future :)

AIS extended to color. Pointillism?

Despite personally finding the monochrome stippled images beautiful, one trivial extension I thought of was to have the AIS algorithm plot colored disks by averaging the underlying image pixels. I was aiming for a somewhat pointillist effect, which it kind of reminds of. However, the image doesn’t completely work for me, because the approach bases the stipples (the strokes in this color version) exclusively on the darkness of the image, completely ignoring the homogeneity of the tone; this will prevent dark broad strokes from appearing in the result no matter what. Obviously that’s what the stippling algorithm is designed to do: generate dark shades from dense clustering of points, but I do think something like this could be achieved:


An application implementing the Adaptive Incremental Sampling and Recursive Wang Tiles algorithms described in this post is provided. The source code is written in C# under Visual Studio 2010. If you just want to run it, there are binaries already built, along with a pre-generated set of Wang Tiles, included in the compressed file.

Precompiled binaries:

Source code repository on GitHub:

Stippling application: AIS and Wang Tiles approaches available

Stippling application: Wang Tiles generator, with an (optional) detailed step breakdown.


[Cook 86] Cook, R. L. 1986. Stochastic Sampling in Computer Graphics. ACM Transactions on Graphics 5, 1, 51–72.

[McCool 92] McCool, M., and Fiume, E. 1992. Hierarchical Poisson Disk Sampling Distributions. In Proc. Graphics Interface ’92, 94–105.

[Kopf 06] Kopf, J., Cohen-Or, D., Deussen, O., and Lischinski, D. 2006. Recursive Wang Tiles for Real-Time Blue Noise. ACM Transactions on Graphics (Proceedings of SIGGRAPH 2006).

[Hongwei 09] Li, H., Nehab, D., Wei, L.-Y., Sander, P., and Fu, C.-W. 2009. Fast Capacity Constrained Voronoi Tessellation.

[Cohen 03] Cohen, M. F., Shade, J., Hiller, S., and Deussen, O. 2003. Wang Tiles for Image and Texture Generation. ACM Transactions on Graphics 22, 3, p. 287.

[Lopez 10] Ascencio-Lopez, I., Meruvia-Pastor, O., and Hidalgo-Silva, H. 2010. Adaptive Incremental Stippling using the Poisson-Disk Distribution. Journal of Graphics, GPU, and Game Tools.

[Wei 08] Wei, L.-Y. 2008. Parallel Poisson Disk Sampling. In SIGGRAPH 2008, ACM.

Posted in programming.


On Mesh Sampling

Fig 1. Sampling the surface

It’s not unusual for a given algorithm to require values or positions sampled from a geometric mesh – some of the projects described in past entries of this blog are examples of it: the Weathering Simulation and the Space Colonization Algorithm required points to be placed on the surface of a mesh, whereas the Voronoi Shattering seeds the fragments with positions obtained throughout the volume of a source object.

In this post I’ll provide more detail on the implementation of those two kinds of samplers, and at the end of the post you’ll find a link to some source code which should hopefully complement the description.

Sampling the surface of a mesh

Placing random samples along the triangles of a polygonal mesh is the easiest case. The process consists of simply choosing a random triangle and then generating a point within its bounds, repeating until we’ve reached the desired number of samples.

Sampling a triangle can be done using barycentric coordinates, as depicted in Figure 1. We start with two variables u and v holding random values between 0 and 1 such that u + v ≤ 1 (we can keep generating pairs until the condition is met or, even quicker, just take the complementary values u = 1 – u, v = 1 – v whenever u + v > 1). The resulting point is given by the weighted sum of the triangle vertices:

s = u*A + v*B + w*C = u*A + v*B + (1 - (u+v)) * C
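A minimal Python sketch of this sampling step, with the triangle vertices given as coordinate tuples:

```python
import random

def sample_triangle(A, B, C):
    """Uniformly sample a point inside triangle ABC via barycentric coordinates."""
    u, v = random.random(), random.random()
    if u + v > 1.0:
        # Take the complementary values instead of rejecting the pair.
        u, v = 1.0 - u, 1.0 - v
    w = 1.0 - (u + v)
    # s = u*A + v*B + w*C, computed per coordinate.
    return tuple(u * a + v * b + w * c for a, b, c in zip(A, B, C))
```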

In order to guarantee more uniform coverage, it’s better to weight the probability of a triangle being sampled by its area, so that larger triangles are sampled more often and hold more samples on their surface. We can build a lookup table that picks elements from the triangle array proportionally to their area with the following simple steps:

1. Calculate the area of every triangle and store it into an array.

E.g. [ 0.1, 0.4, 0.5, 0.2 ]

2. Find the smallest value in the array and divide every element by it, rounding up to the next integer. The array now holds how proportionally larger each triangle is compared to the smallest one.

ceil( [ 0.1, 0.4, 0.5, 0.2 ] / 0.1) = [ 1, 4, 5, 2 ]

3. Add up the elements of the array and initialize a second array of that length, so that the index i of each triangle is repeated as many times as its corresponding value in the first array:

A1: [ 1, 4, 5, 2 ]

A2: [ 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3 ]

4. All that’s left is picking a random element from A2, t = A2[ random(0,1) * length(A2) ], as our triangle index.
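The four steps above can be sketched in Python as follows (a small epsilon guards the division against floating-point error before the ceil):

```python
import math
import random

def build_index_table(areas):
    """Steps 1-3: expand triangle indices proportionally to their area."""
    smallest = min(areas)
    # Round up to the next integer; the epsilon avoids exact ratios
    # tipping over due to floating-point error.
    counts = [math.ceil(a / smallest - 1e-9) for a in areas]
    table = []
    for i, c in enumerate(counts):
        table.extend([i] * c)
    return table

def pick_triangle(table):
    """Step 4: a uniform pick from the table is an area-weighted triangle index."""
    return table[int(random.random() * len(table))]
```

Running `build_index_table([0.1, 0.4, 0.5, 0.2])` reproduces the A2 array from the example above.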

Sampling the volume of a mesh

Things get a bit more involved when we want to place samples not just on the surface but also in the volume enclosed by the object. I’ve personally been using a couple of alternatives with different pros and cons that I found useful; an implementation as Maya nodes can be found in the provided source code :)

Using ray intersections

A very straightforward way of sampling a volume is the one illustrated in Figure 2: we begin by obtaining the bounds of the mesh and then iteratively shoot rays joining two random points on opposite faces.

Fig 2. Sampler by Raymarching

Assuming the mesh is closed, each ray will intersect the geometry in a number of segments (red dashed lines), each defined by an entry and an exit point. We generate samples for each segment, taking care to preserve an overall sampling density – #samples on segment = segment length * density – where the density is defined in the implementation as a “linear density factor” obtained from the bounding box volume and the number of samples requested. Rinse and repeat until we have enough samples.
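The per-segment step can be sketched like this – a Python sketch with hypothetical names, assuming the entry and exit points of one segment have already been obtained from the ray intersections:

```python
import random

def sample_segment(p_in, p_out, linear_density):
    """Place samples along one entry/exit segment so that
    #samples ~= segment_length * linear_density."""
    length = sum((b - a) ** 2 for a, b in zip(p_in, p_out)) ** 0.5
    count = int(length * linear_density)
    samples = []
    for _ in range(count):
        t = random.random()  # random parameter along the segment
        samples.append(tuple(a + t * (b - a) for a, b in zip(p_in, p_out)))
    return samples
```

The outer loop would accumulate the results of `sample_segment` over every segment of every ray until the requested sample count is reached.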


The pro of this approach is that it fits the shape perfectly, since we can precisely obtain the entry and exit points for each ray. Maya provides built-in facilities for retrieving all the intersections through the MFnMesh class, using an internal grid acceleration structure (which is fair enough for moderately sized meshes). The downsides of this technique are that it may be a bit slow, depending on our requirements, and that the distribution is far from ideal for few samples – although it tends to converge decently as the number of requested samples increases. I found it useful to limit the number of samples a given segment can contribute, forcing the algorithm to shoot more rays and thereby increase the density of segments instead of the number of samples along each segment.

Posted in programming.
