22. Excluding Geometry from G.I.

1) There is no way to exclude geometry from GI at this stage. Everything you can see, if you look in the right direction, has an effect on your GI.

2) It is possible to do it quite easily just by using secondary bounces and caustics; however, this is slow. Properly configured glass "should not" require major adjustments to ambient lighting - it should let most of the light through. And while you certainly take a speed hit from tracing reflection and refraction rays around your glass (with standard settings, many rays will be shot, and this *hurts*), you can dramatically reduce it and get very similar results simply through:

a) Cranking reflective and refractive bounces down. Lots. Meaning reflect goes to 0, and refract to maybe 4. That means you shoot a "tiny fraction" of the rays, but see only a very minor difference in the result (unless the rest of your room is loaded with complex refractive objects or something). A rough sketch of the ray-count arithmetic follows after this list.

b) You can use the fast thin glass approximation, and cut down the complexity of the situation even more. Once you do that, you should find your render times go up relatively little compared to empty window boxes, and you get realistic, funky glass. Unless your camera is like 1 centimeter away from it at a weird angle.
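To see why cutting bounce depths matters so much, here is a rough illustrative Python sketch of how secondary ray counts grow with bounce depth (the branching factor and depths are made-up numbers, not Brazil's actual sampling behaviour):

    # Illustrative only: assume each hit on glass spawns `branch` new
    # reflection/refraction rays, so the ray count grows geometrically with depth.
    def secondary_rays(depth, branch=2):
        """Total secondary rays spawned by one primary hit, recursing `depth` levels."""
        return sum(branch ** level for level in range(1, depth + 1))

    # Deep default-style recursion vs. the reduced depth suggested above:
    print(secondary_rays(10))   # 2046 rays per primary hit
    print(secondary_rays(4))    # 30 rays per primary hit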



23. Transparent Shadows from Glass

Sorry, there was some bad info there... after a little experimenting, basically this is the situation:

If you stick a sheet of nice glass in a window in a scene, it will cast black shadows, whatever the shadow type (raytraced or shadow map). If you hit exclude, it will work fine for shadow-map shadows, but not for raytraced ones - so you can do those scenes: just use shadow maps and exclude. If the shadows are too blurry, bump the map resolution up. And they work fine with GI if you set it up right.



24. C.S.G. Info

A C.S.G. primitive is a primitive that is defined by an equation instead of a polygonal representation. For example, a sphere can be defined as a mathematical formula instead of a series of polygons. They have many advantages: the formulas are usually smaller and more memory-efficient than large databases of polygonal points, they can remain smooth no matter how close you get to them, and in a lot of cases they're very fast to intersect a ray with (what a raytracer does), which means you can use millions of them onscreen with a much smaller hit in speed than using polygons.
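As a concrete example of "fast to intersect a ray with", here is a minimal analytic ray-sphere intersection in Python (a textbook quadratic solve, not any particular renderer's code):

    import math

    def ray_sphere(origin, direction, center, radius):
        """Return the nearest positive hit distance along the ray, or None.
        Solves |o + t*d - c|^2 = r^2, a simple quadratic in t."""
        ox, oy, oz = (origin[i] - center[i] for i in range(3))
        dx, dy, dz = direction
        a = dx*dx + dy*dy + dz*dz
        b = 2.0 * (ox*dx + oy*dy + oz*dz)
        c = ox*ox + oy*oy + oz*oz - radius*radius
        disc = b*b - 4.0*a*c
        if disc < 0.0:
            return None                       # ray misses the sphere
        t = (-b - math.sqrt(disc)) / (2.0*a)  # nearer root first
        if t < 0.0:
            t = (-b + math.sqrt(disc)) / (2.0*a)
        return t if t >= 0.0 else None

    print(ray_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), 1.0))  # -> 4.0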

However, some mathematical objects are very slow to calculate intersections with, so you actually get much higher performance with them as polys - you *can* render them using direct intersection, but it is a path for pain...

Also, some mathematical forms are not defined in closed form but through iteration, and those can be extremely hard, if not impossible, to solve for directly...

And of course, many things that *are* iterated can be rendered - there are renders of 3D Mandelbrot sets (actually, POV-Ray can do this) that use quaternions (a four-component generalisation of complex numbers - a kind of hypercomplex math) to extend the Mandelbrot set into 3D space - if you take a certain slice of it, you get the standard 2D Mandelbrot set. These renders typically use algorithms where you set a given error tolerance, and they take longer accordingly - and if you clamp that tolerance properly, you can make sure your accuracy corresponds correctly with your sampling rate, so you *never* have visible error *or* spend any time computing stuff that isn't significant.
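For the curious, here is a bare-bones sketch of the underlying iteration - an escape-time membership test for a quaternion Julia set (purely illustrative; it is not POV-Ray's implementation, and the constant chosen below is arbitrary):

    def quat_mul(a, b):
        """Hamilton product of two quaternions (w, x, y, z)."""
        aw, ax, ay, az = a
        bw, bx, by, bz = b
        return (aw*bw - ax*bx - ay*by - az*bz,
                aw*bx + ax*bw + ay*bz - az*by,
                aw*by - ax*bz + ay*bw + az*bx,
                aw*bz + ax*by - ay*bx + az*bw)

    def in_julia_set(point, c, max_iter=50, bailout=4.0):
        """Escape-time test: iterate q = q^2 + c and see if |q| stays bounded."""
        q = point
        for _ in range(max_iter):
            q = quat_mul(q, q)
            q = tuple(qi + ci for qi, ci in zip(q, c))
            if sum(qi*qi for qi in q) > bailout:
                return False
        return True

    # With the last two components zero, this reduces to the familiar 2D complex iteration.
    print(in_julia_set((0.0, 0.0, 0.0, 0.0), (-0.2, 0.6, 0.0, 0.0)))   # -> True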

On the topic of directly rendering meshsmooths, I believe someone (Jos Stam, IIRC, rather than Henrik Wann Jensen, the cool photon maps/subsurface scattering guy) proved that you can basically decompose subdivision surfaces into solvable parametric things. There are lots of words like "eigenfunctions" and so forth flying around, and I'm not so hot on the math, so I may have got some of that wrong, but basically whether or not a mathematical form can easily be raytraced directly varies a lot from case to case, and I believe subdivision surfaces *can* be solved directly for ray intersections.

An interesting note is that POV-Ray does not use exact root solvers for its patches, lathes, etc., so I think it's either not quite possible to solve for them deterministically, or really slow. But this is all speculative, "I think this is what I vaguely remember" stuff, so the resident math gurus can probably help...

One neat algorithm for deterministically solving patches to a given error tolerance (it's a weird one really, but quite fast - it's basically an iterative mathematical technique, but it acts almost like micropolys (though not quite)) goes basically like this (a rough sketch in code follows after the steps):

1) Decide which sample you are taking, and its error tolerance (say 1/4 of a pixel, if you are taking 2x2 samples per pixel).

2) Subdivide the patch.

3) Pick the subdivided section that lies roughly along the ray.

4) If the subdivided section is down to roughly the size of your sample area, you are accurate enough: stop, and say "this is the distance of my little slice of patch here"; otherwise, go back to 2).
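Here is a very rough Python sketch of that idea, using a flat four-corner quad as a stand-in for a patch and midpoint subdivision (purely illustrative - a real implementation would use proper Bezier subdivision, bounding volumes, and miss handling):

    import math

    def sub(a, b): return tuple(ai - bi for ai, bi in zip(a, b))
    def add(a, b): return tuple(ai + bi for ai, bi in zip(a, b))
    def mul(a, s): return tuple(ai * s for ai in a)
    def dot(a, b): return sum(ai * bi for ai, bi in zip(a, b))
    def mid(a, b): return mul(add(a, b), 0.5)

    def dist_point_to_ray(p, origin, direction):
        """Distance from point p to a ray with a normalised direction."""
        t = max(0.0, dot(sub(p, origin), direction))
        closest = add(origin, mul(direction, t))
        return math.dist(p, closest), t

    def solve_patch(corners, origin, direction, tol):
        """Recursively subdivide a flat 'patch' (4 corners: p00, p10, p01, p11)
        until the piece nearest the ray is smaller than tol; return the hit distance."""
        p00, p10, p01, p11 = corners
        if math.dist(p00, p11) < tol:          # small enough: report this slice's distance
            _, t = dist_point_to_ray(mid(p00, p11), origin, direction)
            return t
        # split into 4 subpatches via edge and centre midpoints
        e0, e1, e2, e3 = mid(p00, p10), mid(p01, p11), mid(p00, p01), mid(p10, p11)
        c = mid(p00, p11)
        subpatches = [(p00, e0, e2, c), (e0, p10, c, e3),
                      (e2, c, p01, e1), (c, e3, e1, p11)]
        # pick the subpatch whose centre lies nearest the ray and recurse into it
        best = min(subpatches,
                   key=lambda q: dist_point_to_ray(mid(q[0], q[3]), origin, direction)[0])
        return solve_patch(best, origin, direction, tol)

    # Unit quad in the z=3 plane, hit by a ray going down +z from the origin:
    quad = ((-1, -1, 3), (1, -1, 3), (-1, 1, 3), (1, 1, 3))
    print(solve_patch(quad, (0, 0, 0), (0, 0, 1), tol=0.01))   # ~3.0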

Of course, that doesn't handle no-collision situations, cases where patches fold back on themselves, bounding box optimizations, etc. - it's not really an implementation-grade description at all... but it's an interesting solution to the problem, if a pretty inefficient one.

Analytically raytracing stuff more complex than spheres, planes, etc. is a very cool feature, but the theory can be a real bitch.



25. H.D.R.I. vs Manual Lighting

Properly processed HDR images used for illumination will always provide realistic illumination - actual, real-world illumination data is used for reflections/shading.

Manual light setups can be very realistic too, properly tweaked. In fact, manual lighting by an expert can be essentially indistinguishable from H.D.R.I., subjectively.

The key word here is control. H.D.R.I. gives instant, canned, realistic lighting (though not renderings - realistic materials etc. are still important), but you have essentially no control. If you say "I want more light here, and less here", that is very difficult to achieve without stuffing around with negative lights and image-editing the H.D.R.I. (which is difficult, if not impossible, because HDR Shop has no painting features - it's designed for processing H.D.R.I., not retouching them). Manual lighting gives full and simple control. If you want a different lighting effect, it is relatively easy to set up, once you have learned how to light effectively. Also, manual lights are generally much faster to render.

Basically, from an artistic point of view, manual lighting is generally better - more work, but it pays off in quality. Skylighting can be useful for getting nice fill light (and is much more controllable than H.D.R.I. images, which tend to have extreme bright points). Often, an excellent solution can be a combination - set your white point relatively low, and have a low intensity on your H.D.R.I. map, in your skylight slot, to get nicely shaded fill colours, then play with output curves to get the tint exactly right, then add plenty of point lights around the place to tune your key lighting points - that way you can control the colour and position of your bright lights with a lot of precision, but basic fill stuff is done for you.

Also note there is a great deal of ambiguity in many people's understanding of what exactly H.D.R.I. is. H.D.R.I. is simply storing extra intensity information in bitmaps, period. There is nothing intrinsically linked to lighting in H.D.R.I., apart from the fact that this extra intensity precision and range can be valuable for it. From the context, I inferred that you were referring to skylight global illumination based off H.D.R.I. images, but it's actually very ambiguous. For example, if you said "is H.D.R.I. more realistic", without talking about normal lighting, I would not know if you meant it was superior to standard bitmaps (the logical conclusion), better for illumination, better for reflections/refractions, etc.

In general, as I said, H.D.R.I.-based GI is more technically accurate than point lights, but pure technical accuracy is usually not the goal at all - the goal is artistry and/or *subjective* realism (which is another beast entirely) - so usually a blend of the two, or pure point lights, works better.

It is also possible that by H.D.R.I. you meant GI (which is completely different to H.D.R.I., but people mix them up a great deal), and in that case my response is: yes, GI is typically more realistic. It gives minimal reduction in artistic control, and is often a rapid route to stunning illumination realism. If GI doesn't pan out quite as desired, it is simple to turn the GI multiplier down a little and add a few point lights (possibly negative) to fine-tune, with lots of artistic control, a great deal more simplicity, and much faster setup (and minimal speed impact on rendering, because dimmer GI lowers the required sampling rates).



26. H.D.R.I. Plugin Info

Max's internal image handling is 16 bits per channel, whereas H.D.R.I. are 32-bit per channel floating point (that is, instead of a colour being 0-255, it is any number you like - effectively -infinity to infinity).

Because Max can't handle floating point images, the H.D.R. image needs to be clamped at a white point.

To do this, load the image into a bitmap map, and while the file open dialog is up, click "setup". There you will find a histogram of the H.D.R.I. (a logarithmic one, due to the large range of numbers), and clamping controls similar to "levels" in Photoshop. Simply move the white point until there are no purple bits (signifying "clipped" colours, where there would be data loss if clamped). Then copy the value for the linear white point (the larger number), and paste it into the RGB output part of the bitmap map.
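To illustrate what that white-point clamp is actually doing to the pixel values, here is a simplified Python sketch (not the plugin's exact code - the real conversion also deals with gamma and quantisation):

    def clamp_to_white_point(pixel_rgb, white_point):
        """Rescale floating-point HDR values so `white_point` maps to 1.0,
        then clip anything brighter (those are the 'clipped' colours)."""
        return tuple(min(channel / white_point, 1.0) for channel in pixel_rgb)

    # A fairly bright HDR pixel, clamped with a white point of 8.0:
    print(clamp_to_white_point((6.0, 5.5, 4.0), 8.0))   # -> (0.75, 0.6875, 0.5)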

Careful, though - with some H.D.R.I. with a very large range (like the St Peter's one) there will be banding. So you'll have to clamp it lower, and put up with inaccuracy.

27. Making high-resolution H.D.R.I. Maps

The HDRI plugin allows you to write out HDRI images as well, although this is in its early stages (combined with Brazil, anyway). There are a few caveats, so please follow these steps:

1. Make your scene and make sure that whatever should be the brightest thing in your scene ends up being white. Your render at this point will look pretty dark. It's supposed to.

2. Render out to an RPF file (no, not HDRI... try it, and you'll see why). Enable ALL the G-Buffer options in there. Only a few are actually needed, but having them all enabled prevents any confusion.

3. Now, in the Brazil VFB, hit the save button, and choose the HDRI format. You'll get two options: H.D.R. or L.D.R. Take your pick, but I suggest H.D.R.

And there you go... one H.D.R.I. image. You can now load this image back into the material editor as an H.D.R.I. map, adjust the exposure stuff, etc. Of course, this isn't a panoramic image, and there are some other issues you should heed that are being/will be looked at. But for most simple H.D.R.I. image renderings it should suffice.



28. Tuning Reflections

There are some issues to consider when tuning reflections...

To start with, Fresnel reflections, while physically correct, great, and too often unknown to people making materials, are not how all materials reflect. Many materials do reflect this way - notably glass, water, ceramics, etc. - and for these materials Fresnel reflections are a must. However, some materials do not reflect in a Fresnel fashion. Metals do not have Fresnel reflectance properties (I believe Fresnel reflectance applies only to dielectric materials, that is, not metals).

In addition to this, Fresnel isn't a single kind of reflection that magically gives you physical correctness. Instead, it is a varying property that changes with material I.O.R. (index of refraction). Fresnel reflections can vary very widely with changing I.O.R. - some I.O.R.s give little reflection except at glancing angles, and others give fairly intense reflections even away from glancing angles.
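To make that concrete, here is a small Python sketch of how reflectance varies with angle and I.O.R., using the common Schlick approximation to the Fresnel equations (a generic approximation, not Brazil's shader code):

    import math

    def fresnel_schlick(cos_theta, ior):
        """Approximate dielectric Fresnel reflectance for a given incidence angle.
        cos_theta is the cosine of the angle between the view ray and the normal."""
        f0 = ((ior - 1.0) / (ior + 1.0)) ** 2      # reflectance at head-on incidence
        return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

    for ior in (1.333, 1.6, 2.4):                  # water, typical glass, diamond
        head_on = fresnel_schlick(1.0, ior)
        glancing = fresnel_schlick(math.cos(math.radians(80)), ior)
        print(f"IOR {ior}: {head_on:.3f} head-on, {glancing:.3f} at a glancing angle")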

Anyway, it's a bit of a complex issue, but primarily, you should pick reflectance type based on your material type.

Very reflective materials with mirror-like properties, like mirrors and metals and so forth, should be created more through consistently intense reflections. That means fully reflective, highly reflective. Sometimes it's more realistic to use a light grey, or a tinted colour, rather than pure white, because almost nothing reflects light in a completely efficient way - in fact, if something reflected all light and other incoming radiation, it would have a tendency to become extraordinarily cold. So if you are trying to make a gold material, try a gold-tinted reflection colour.

Plastics, water, glass, ceramics, diamond, etc. should use Fresnel reflections with the appropriate I.O.R. Plastics vary a fair bit - choose a good-looking value, usually somewhere between about 1 and 3 or 4. Water has an I.O.R. of 1.333. Glass varies depending on the type of glass, but is usually in the vague region of 1.6, though it can be below 1.5 or above 2 (1.6 usually looks great for glass renders). Ceramics are tuned similarly to plastics - the difference is largely in the colouration and the increased subtlety of diffuse illumination in ceramics. Diamond is in the 2-ish range, ~2.4. The MAX docs have a great I.O.R. list, both in the printed manuals and in the online help - do a search for I.O.R. in the help file.

Now, onto tinting:

Water, while often looking really blue, is mostly blue because it is reflecting the sky. Sure, it has a tendency to filter out non-blue wavelengths of light, but it is pretty much colourless. Your glass of water doesn't look blue at all, and if it did, you would probably think there was blue gunk in it. So make your water either completely colourless or very subtly tinted - I mean an oh-so-slightly blue-shifted colour.

Glass should have its refraction colour tinted to whatever colour you want the glass to be - so a green glass should have a green refract colour. I'm afraid I don't remember whether coloured glass tints its reflections too, but I don't think it does - I think it's only light travelling through the glass that gets tinted. And clear glass is basically identical to water, except that the I.O.R. is different (1.6 or so instead of 1.333).

Ceramics and plastics have reflections that are the result of the glaze/clear surface layer, so green plastic does not have green-tinted reflections - the reflection colour is unmodified by the greenness, and the green effect comes entirely from the diffuse underlayer. So make your reflect colour clear, and your diffuse colour the colour of your plastic. If doing plastic, make your diffuse colour very bright, and probably a fluorescent colour. Ceramics are usually a little dimmer, and act less as diffuse shading, but the main difference when trying to construct a shader approximation is that their diffuse shading is less colour-saturated.

Aside from Fresnel properties, one of the major differences between the appearance of metals and other objects is this reflection tinting:

Most nonmetals don't alter their reflection colours, but metals tint them.

So blue plastic has colourless reflections and blue diffuse, while blue metal has no diffuse, since its appearance comes entirely from mostly focused reflections, which are tinted blue.



29. Checking G.I. Error

A great way to check how much error your G.I. has (and thus blotchiness/flickering) is to use the "lock G.I. shade rate to image rate" checkbox and render without antialiasing. Every pixel will use a new G.I. result instead of interpolating, so you can see exactly how much "wobble" there is on parts that should be smooth. This works just fine on small images. For really blotch-free images and flicker-free animations, only a fine grain should be visible - anything stronger than that will be blotchy. Even if it doesn't look like that in a render, the blotches are there; you just can't see them easily because they are masked or subtle - compare a render that seems OK to one tuned using this technique, and the tuned one often looks a fair bit cleaner.



30. Rendering Buckets Info

Every time Brazil renders a bucket, it has to do a few things to keep track of it, allocate/de-allocate memory etc.

So if there is nothing in the buckets (a plain white scene, no objects/lights), then the buckets basically render almost instantly, and the time taken is practically all from keeping track of them - hence the bucket size isn't very important to the render time, and it's practically all down to the number of buckets.

With scenes with objects, you still have all this bureaucracy going on for each bucket, so you have to track all that, but you also have to handle the actual rendering of the scene - the nitty-gritty of intersecting rays with objects, shading them, etc., which no longer takes a negligible time.

So you still mostly get faster renders with bigger buckets - you have to do less per-bucket tracking work - but since you are now spending more time on actual rendering tasks, the difference isn't as big. In a fairly simple scene like that, a lot of the time is still spent handling buckets compared to the raytracing. But if you have a super-heavy scene with D.O.F., complex materials, loads of lights, huge amounts of reflections, etc., it still spends the same amount of time per bucket on bookkeeping - so that overhead becomes insignificant as you go to more complex scenes.

If you look, the times when changing only the bucket size are similar for the two scenes:

For the empty scene:
2 sec going down to 64-pixel buckets
14 sec going down to 32-pixel buckets
80 sec going down to 16-pixel buckets

For the object scene:
3 sec going down to 64-pixel buckets
19 sec going down to 32-pixel buckets
69 sec going down to 16-pixel buckets

Sure, there are some differences, and that's down to natural timing variation and a few other things, but they are mostly the same. And you can see that these time shifts with bucket size are driven by bucket counts - if you make a bucket half the vertical and horizontal size, you have four times as many buckets, and the bucket bookkeeping takes about four times longer as a result.
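A toy model of this in Python (the overhead and per-pixel numbers are made up for illustration, not measured from Brazil):

    def render_time(width, height, bucket_size, per_bucket_overhead, per_pixel_work):
        """Rough model: total time = bucket bookkeeping + actual shading work.
        Halving the bucket size quadruples the bucket count, and therefore the
        bookkeeping term, while the shading term stays the same."""
        buckets_x = -(-width // bucket_size)       # ceiling division
        buckets_y = -(-height // bucket_size)
        n_buckets = buckets_x * buckets_y
        return n_buckets * per_bucket_overhead + width * height * per_pixel_work

    for size in (64, 32, 16):
        empty = render_time(640, 480, size, per_bucket_overhead=0.03, per_pixel_work=0.0)
        heavy = render_time(640, 480, size, per_bucket_overhead=0.03, per_pixel_work=0.001)
        print(f"{size}px buckets: empty scene ~{empty:.0f}s, heavy scene ~{heavy:.0f}s")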

Now, there are a few other factors affected by bucket size - notably, if you have a heavy scene using lots of memory, or high-powered antialiasing settings, bigger buckets will cause you to swap your butt off and render slowly - but generally, bigger buckets = faster renders. The main reasons to use smaller buckets are that you can see what is being rendered sooner, which is important for preview renders, and that sometimes big buckets are very inefficient - as I said, mostly with highly sampled heavy scenes... and those cellular priority map renders look way cooler with teeny buckets. The bucket overhead is a mostly constant cost that doesn't scale in proportion to scene complexity, so it really doesn't matter too much, though for more time-critical renders do use 64- or 128-pixel buckets instead of 16-pixel ones.

It would be nice to see in Brazil a feature that takes the G.I., antialiasing, and bucket size parameters into account and gives you an approximate bucket-incurred memory requirement, so that you could make sure heavily sampled scenes weren't using too much memory while still using as efficient a bucket size as possible.



31. Antialiasing Filters Info

Well, filtering is a complex science, and basically there is no perfect filter - different filters do different things.

Some filters are very good at getting rid of nasty high frequencies while preserving lower ones (like the very cool sinc and Lanczos filters, which MAX does not have (in R3, anyway)), but have nasty properties like wide support (which basically means they are "fat" - they cover a fair bit of screen space) and ringing (a kind of "intensity bounce" - they don't go smoothly over edges but overshoot the transitions; this can be nice for increasing the perceived sharpness of the image, but excessive ringing looks gross).

From a less technical standpoint there is no best filter, there is only the one that is best for the job.

For a very soft, fuzzy effect, something like the Gaussian filter can actually work OK - but you are probably best off going with something like a proper camera blur effect and sticking with the clear-but-not-harsh look of something like Mit-Net.

For a nice, clear filter, Mit-Net works great. I'm talking about its default blurring and ringing settings here - Mit-Net is not a single filter but a filter family: it isn't one fixed way of weighting sampled pixels, but a formula that generates such a weighting from its input values. Mitchell and Netravali, the researchers who devised the filter, did a whole lot of research and picked the default parameters of 1/3, 1/3 as a good balance. Mit-Net is also very nice from an implementation point of view - it doesn't need lots of sampling to look good.
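For reference, the 1D Mitchell-Netravali kernel itself is short - this is the standard published formula with the two B and C parameters, where B = C = 1/3 gives the default balance:

    def mitchell_netravali(x, b=1/3, c=1/3):
        """1D Mitchell-Netravali filter weight for a sample at distance x
        (in pixels) from the pixel centre. Support is 2 pixels each side."""
        x = abs(x)
        if x < 1.0:
            return ((12 - 9*b - 6*c) * x**3
                    + (-18 + 12*b + 6*c) * x**2
                    + (6 - 2*b)) / 6.0
        elif x < 2.0:
            return ((-b - 6*c) * x**3
                    + (6*b + 30*c) * x**2
                    + (-12*b - 48*c) * x
                    + (8*b + 24*c)) / 6.0
        return 0.0

    # Weight at the centre, and the small negative lobe past 1 pixel (the "ringing"):
    print(mitchell_netravali(0.0), mitchell_netravali(1.2))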

For harshness, sharpness, and extreme clarity, try Cat-Rom. It is often maybe a little too harsh, but for a lot of purposes it gives a great effect.

For a very nice filter that is hairy to implement so that it looks good and runs fast, take the sinc filter. It looks clearer than Mit-Net but doesn't create a harsh image. However, it actually has nonzero values off to infinity, so you have to approximate it and take only a limited part of the sinc filter, which is still a little hairy to compute and damn sensitive to error. And this truncation affects the quality of your filtering - you need a decent-sized chunk for good quality, otherwise it looks pretty bad. The Lanczos filter is a lot like the sinc filter, but modified so that it has slightly worse characteristics yet is much easier to cut off nicely.
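A sketch of that "cut off nicely" idea: the Lanczos filter is just the sinc function multiplied by a wider, stretched sinc window, so it falls smoothly to zero at a chosen support (the generic textbook formula, not MAX's or Brazil's implementation):

    import math

    def sinc(x):
        """Normalised sinc: sin(pi*x) / (pi*x), with the x = 0 case handled."""
        if x == 0.0:
            return 1.0
        return math.sin(math.pi * x) / (math.pi * x)

    def lanczos(x, support=3):
        """Sinc windowed by a stretched sinc; zero outside +/- support pixels."""
        if abs(x) >= support:
            return 0.0
        return sinc(x) * sinc(x / support)

    print(lanczos(0.0), lanczos(1.5), lanczos(3.2))   # centre, a negative lobe, outside the support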

In MAX, try Mit-Net for general purpose work and Cat-Rom for harsh clarity.

Brazil may or may not end up with a sinc filtering option (would be very cool though), and if it does, use that for final renders with good sample rate settings - it's generally a better filter, but a lot more pernickety.

Now, all this applies to standard display systems - systems that display in a simple kind of way: film, most computer screens, and print. TV screens are different, because they have a low-rate interlaced refresh - every other line is drawn, as you know from field rendering. This means the refresh rate of a white, one-field line ends up as basically half that of a white two-field line or a white screen. So strong intensity variations that don't span more than one scanline vertically look gross - they flicker their butts off. The idea of the Video filter is to avoid this by blurring the hell out of your pic so that detail is spread across multiple scanlines. Result: less flicker, fuzzy pics.

It's a tradeoff - you may find that scenes rendered with Mit-Net look great on a TV screen, or flicker like hell. In reality, most TV sets have reasonably blurry images - while the extra blurring of something like the Video filter is painfully noticeable on a computer screen, on TV you would probably have a much harder time telling the difference. TV is lo-fi. But it seems to me that you are best off using Mit-Net if you can get away with it, since it will give a clearer result on systems with the quality for it and look much nicer on things like DVD transfers; if it flickers too much when you test it, go for Video. Or a blurry Gaussian or something. Final tip: don't use filters with much ringing for video. This includes Mit-Net with the ringing cranked up (the defaults should be fine), Cat-Rom, and sinc.



32. Reducing Grain in H.D.R.I. Lighting

One would need quite high sampling values to get rid of most of the grain, which becomes significantly worse when using H.D.R.I. lighting, or even just to smooth it out. Just think about how the light from the H.D.R.I. map is cast into the scene. It's not one solid colour, like when you use a plain skylight, but a lot of different colours. This extra variation is what leads to the grain. You can often improve on this a lot, without impacting render time too much, by lowering your white point. Basically, this reduces the variation, but at the expense of making the render look dimmer (which is fixed by adjusting multipliers until it's about as bright as it was before), and of losing a little of the contrast and detail from the environment (the same contrast in intensity that gives you grain also gives you clear shadows - just like making an area light bigger and dimmer so the same overall brightness is contributed, but the shadows are fuzzier).

So there's a tradeoff - mostly, the lighting is damn even anyway, so you can clip your intensity and it looks just as good and renders with much less grain - but sometimes it doesn't look as good when you do that. Just play around. Try different lowered white points and multipliers until you get the effect you want in the shortest render time possible. For high-contrast environments, this can make a huge difference in render time.
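A small sketch of that white point / multiplier balancing act on raw pixel data (illustrative numpy code, not what the plugin does internally):

    import numpy as np

    def clip_and_rebalance(env, white_point):
        """Clip an HDR environment map at `white_point`, then return the clipped
        map plus the multiplier that restores the original average brightness."""
        clipped = np.minimum(env, white_point)
        multiplier = env.mean() / clipped.mean()
        return clipped, multiplier

    # A fake 1D "environment": mostly dim sky with one very hot spot (the sun).
    env = np.array([0.2, 0.3, 0.25, 0.2, 50.0, 0.3])
    clipped, mult = clip_and_rebalance(env, white_point=2.0)
    print(clipped, mult)   # the variation (and therefore grain) drops; the multiplier compensates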

Another solution is to blur the H.D.R.I. map. Probably the best way to do this is to make a separate blurred H.D.R.I. image, rather than blurring it in Max (Max's blur seems to darken it a bit, probably because it does not operate in 32 bits). Blurring the image smears out the sharp bits, so the light keeps the same average intensity but is less concentrated in the bright spots. This will drastically reduce grain and give softer shadows (if they are desired). Changing the white point, by contrast, tends to darken the image, as it cuts out light rather than averaging it. For a really nice result, try a blurred H.D.R.I. for skylighting, but the original map for the environment (a.k.a. reflections). This will look pretty much the same as if you had opted for the slower render time.
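If you want to prepare such a blurred copy outside Max, here is a minimal sketch assuming the H.D.R. image is already loaded as a 32-bit float numpy array (the loading library is up to you; the key point is doing the blur in floating point so no intensity is lost):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def blur_hdr(image, sigma_pixels):
        """Gaussian-blur a float32 HDR image (H x W x 3) per channel.
        Working in float keeps the total energy; only the sharpness changes."""
        image = image.astype(np.float32)
        return np.stack(
            [gaussian_filter(image[..., ch], sigma=sigma_pixels) for ch in range(3)],
            axis=-1)

    # Toy example: a dark field with one hot pixel keeps its average after blurring.
    hdr = np.zeros((32, 32, 3), dtype=np.float32)
    hdr[16, 16] = 500.0
    blurred = blur_hdr(hdr, sigma_pixels=3.0)
    print(hdr.mean(), blurred.mean())   # the averages match; the peak is just spread out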
