boss3D
21-08-2003 22:08 hem
speed tips 2 brazil
I don't remember where I got this - memory's going, probably... The info is in English, in the comments.
Comments:
21-08-2003 22:10
boss_3d
01. Speed Tips I

Just some quick advice for getting things faster:

1) Go to the Ray Server Params rollup panel, press the "..." button, and hit the Max Speed preset button. Close the panel. I caught Neil - he's been rendering all his images without doing that. Images nearly always render faster with the Max Speed preset, as long as you have the memory for it. Play with the acceleration controls and see how the given presets are interpreted into values in the ugrid settings. Change the values and, well, go crazy. After the last round of changes I no longer have any idea what the true optimum setup is.

2) Keep your materials fast. Don't use layers of procedural textures, and keep the bump map especially as fast as possible. While this is always good advice, it is particularly important for Brazil at this stage: supersampling is expensive.

3) Don't use Max's standard raytrace lights EVER. (NOTE: This was at a time when the raytrace shadow maps were still computed by MAX when rendering with Brazil. Brazil now renders raytrace shadows internally, and is damn fast.)

4) When using global illumination, lighter colored objects render slower than darker objects.



02. Quickshade Info

Without Quickshade the bitmaps or procedural maps on the object are evaluated for every bounce to tell Brazil what color the light bouncing off of the surface is.

With Quickshade you can tell the Luma Server to only evaluate the bitmaps for "n" number of bounces, where "n" = the number set in the "Start Depth" field underneath the "Surface Shading" menu.

With Quickshade on and Start Depth set to 1 only the first bounce will evaluate the image map. Every bounce after that will evaluate the material diffuse color only. For best results make your diffuse color as close as possible to your bitmap.

This will give you a slightly faster render time.

Why? I can only guess, but it is probably easier to say "this surface is color x at every pixel" than to look up the bitmap for every pixel.
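
In pseudocode terms, the saving looks something like this (a minimal Python sketch; the hit/material objects and method names are hypothetical, not Brazil's internals):

```python
def bounce_color(hit, depth, quickshade=True, start_depth=1):
    """Surface color used for a GI bounce at the given ray depth.

    With Quickshade on, only the first `start_depth` bounces pay for a
    full texture lookup; deeper bounces fall back to the flat diffuse color.
    """
    if not quickshade or depth < start_depth:
        return hit.material.sample_diffuse_map(hit.uv)  # full texture evaluation
    return hit.material.diffuse_color                   # cheap flat color
```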



03. Sharp Shadow Maps

If you want sharply defined shadows on standard shadow maps:

Try increasing the density (shadow parameters) to something like 15 or 20.

Increase the size (shadow map parameters) to 1024 or 2048, and the sample range (shadow map parameters) to 8 or 12.



04. Rendering in Passes

You can render a direct diffuse illumination pass, an indirect pass, a self-illuminated texture pass, and a specular pass, and do it all in a compositor.

Simply add the light passes, multiply in the texture pass, then composite the specular however you choose (some use additive, I use a mix of additive and lumakeyed).
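
As a rough sketch of that compositing order in numpy (pass names are hypothetical; all passes are assumed to be same-size linear float images):

```python
import numpy as np

# direct, indirect, texture, specular: float32 arrays of shape (H, W, 3)
def composite(direct, indirect, texture, specular):
    lighting = direct + indirect      # add the light passes
    beauty = lighting * texture       # multiply in the texture pass
    beauty = beauty + specular        # simple additive specular
    return np.clip(beauty, 0.0, 1.0)  # clamp for display
```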

If using reflection or glass, you'll need a final pass for that. Gives great control...and you can blur the indirect pass AND render it at lower res in many cases.

I've done a lot of testing with this technique, and in some ways this is what Brazil does internally. This is how a multipass component bucket renderer works internally, and why splitting out these passes will be trivial in future versions.



05. Geometry Lights

Here's the process for getting an object to become a light source in Brazil.

Use a standard material, not the Brazil test material, and ALWAYS make the material self-illuminating.

Now, the trick here is that you want the object to give off more light, and to do that you increase the output. This can be done a few ways. If you just want a straight white light, you can use an Output map in the diffuse slot (the Output map is basically the same thing as the Output rollout found on most map types) and set the RGB Level to around 5. This is not a strict setting and can be increased or decreased depending on the scene; 5 is just a fairly good starting point. You can also use any type of map that has an output on it to give off light - for example, put a bitmap in the diffuse slot and increase the RGB Level in its Output rollout.

Want a colored light? Instead of the spinner, use a color to make the map self-illuminating. Place a Mix map in the diffuse slot, change the color in slot one to the color you would like the light to be, and increase the RGB Level in the Output rollout. Leave the mix amount at 0 - this will use the first color slot and ignore the second. Basically you're just using the Mix map for its color slot and its Output rollout.

The other thing you have to remember is that secondary illumination must be turned on in the Brazil renderer's Luma Server settings. Otherwise object lights won't work, and you'll scratch your head for days until you figure it out.



06. Geometry Lights II

If you download the Brazil release 0.1.3 from SplutterFish, you'll get the tutorial files, which include some geometry lights - which is what you're looking for.

If you can't get to those, create a STANDARD material (don't use the Brazil material - it will waste bounces you'll never see), set the diffuse colour to pure white, and set the self-illumination to 100 (make sure it's self-illumination, with the numerical slider). Then place an Output map in your diffuse slot and turn the RGB amount up to 5 or so - this will vary depending on your scene and the geometry you're using as your light; 5 is just a good starting point.

The amount of light emitted by the object is mostly dependent upon how large the object is. Increasing the output value will also work, but not to the same degree as increasing the size of your object.



07. Composited Glass

Well, if you are working with video you will need a video compositing app like After Effects, Combustion or Digital Fusion. If it's only a single frame, yes, you can do it with Photoshop.

It's a little hard to explain in a forum post but the overall process involves making several rendering passes.

First you do your normal Brazil one with no glass materials. Render your scene normally, this would be your first pass.

After that, modify the scene as needed without touching the actual geometry or camera paths - for example, change the lighting setup and materials so that your glass casts a light, transparent shadow and is illuminated by a regular MAX lighting rig. Make all the objects already shown in the Brazil render into matte objects by applying Blur's matte material, and set them not to affect alpha, so that they occlude your glass but don't affect the overall alpha channel beneath them; this way they are not actually rendered, but will effectively cut out your object when it goes behind them. You can also set them to show received shadows even though the objects themselves are not rendered. This is your second pass, which should look like a piece (or pieces) of glass surrounded by black.

After that, simply load up both passes in your program of choice, layer pass two on top of pass one, adjust transfer modes, color correct, and voila.

(NOTE: This trick indeed works nicely, but in that case the glass you composite in will be purely based on the environment... so it's a nice way to get fast, good-looking glass, but it isn't so accurate.)



08. Sunlight

Getting a scene to look like it's lit by the sun is no easy task. The best way is to use a single directional light as the main light in the scene. Sometimes I have to crank the multiplier up to 3 or 3.5. Then use the skylight to fill out the rest of the scene. The tricky part is getting the ambient colour of the scene: since the main light is going to be orange, you want the ambient colour to be a complementary blue - most likely a dark blue, depending on what the scene requires. You'll have to set the number of bounces greater than 1, because the direct light you see coming from the sky is the first bounce; try 3-5 bounces depending on room complexity.



09. Isolating a Bug

Here's a few general notes on how to isolate a bug once you've found one.

Every time I get a bug file, I go through a process to isolate exactly what is causing the bug. If you have a crashing brazil file, here are some steps you might want to go through to help figure out exactly what is crashing.

1) Start deleting objects one by one until the file no longer crashes. Many bugs occur because of a single bad object, or bad interaction between a group of objects. Remember to check for hidden and frozen objects. Once you've figured out which object(s) are responsible, try and delete everything not related to those objects, so you have a file with very minimal geometry.

2) Remove any 3rd-party plugins. Any plugin you're using, such as a modifier or a special atmospheric plugin, remove from the scene. If it stops crashing, you know it was a plugin compatibility problem; report which plugin was giving trouble.

3) Remove all environment and render effects, and remove the background.

4) Assign a plain standard material to all remaining objects in your scene. If it stops crashing, it may be a material problem. If it does stop crashing, go back and start assigning the standard material to every object you can while still retaining the crash.


5) Once you've simplified the number of materials, start simplifying the materials themselves, by slowly turning off reflections, refraction, removing bitmaps, etc.

6) Take your current saved file, and merge the objects from that file into a fresh copy of MAX. Assign the Brazil renderer and render. If it doesn't crash, then it may have been your Brazil settings. Open two copies of MAX, take your original file, and start changing its parameters until you have default values assigned to everything. Check for crashes while changing parameters to see if you can narrow down which parameter may be causing the crash - does it crash if all the ray depth controls are at default? The raytrace accelerator settings? If the skylight is turned off?

If your file still crashes and you've done all these things, then you now have a very clean file that I'd appreciate you uploading so we can look at it. If you managed to stop the crash, you probably know what area the crash occurred in, and I'd still appreciate an uploaded file, now with more specific info on exactly what is crashing. When reporting the crash or bug, report it in simple steps, like:

1) Open max

2) open file

3) Press this button

4) Change that spinner

5) Render

6) Max crashes.

Also, giving info on which copy of MAX you have, your OS, and which version of Brazil you're running is very important.



10. Disappearing Faces

What's happening is that faces are falling between cells in the acceleration grid, and being ignored.

Any accelerated raytracer will fit its grid around your geometry, as closely as is practical. When you have a lot of evenly spaced surfaces that are all parallel to world axes (a perfectly flat ground, walls aligned perfectly to north, south, east and west), the acceleration grid can fit so tightly around the objects in your scene that when it comes to checking for ray intersections with individual faces, some polygons can end up lying exactly on the border between grid cells - not actually within any cell. Being in this limbo outside the acceleration grid doesn't just make those faces slower to render - it makes them invisible. This is an issue all raytracers have to deal with, and I'm sure this kink will be thoroughly ironed out of Brazil well before it reaches 1.0.
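
To make the failure mode concrete, here's a toy sketch of how a face sitting on an exact cell boundary can get missed (hypothetical and drastically simplified - a real ugrid works in 3D and handles bounds far more carefully):

```python
import math

CELL = 1.0  # acceleration-grid cell size

def cell_index(x):
    # A face lying exactly on a cell boundary maps to just one index...
    return int(math.floor(x / CELL))

# An axis-aligned wall at x = 3.0 sits exactly on the border of cells 2 and 3.
wall_x = 3.0
print(cell_index(wall_x))  # -> 3: the wall gets registered only in cell 3

# A ray whose traversal stops at the far edge of cell 2 never tests the wall,
# so its faces silently vanish from the render.
```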

Disrupting the overall squareness of your scene might help (perhaps by binding everything to a very slight Noise space warp), and creative grouping is recommended.

21-08-2003 22:14
boss_3d
11. Video Info

I work in the 3D broadcast television commercial industry, so I can answer that question for you. NTSC, the standard for North America (National Television Standards Committee), runs at 30 fps (29.97 to be precise, which matters for audio syncing over longer pieces of footage). PAL, the European standard, runs at 25 fps. Film, as we all know, runs at 24 fps.

Which medium you are outputting for is obviously integral to know BEFORE starting a job.

When rendering for this standard, we work at either 720x486 or 720x540, depending on the software being used for output. Some systems prefer using a 540-pixel-high image and then squishing it down to 486 (or 480) to compensate for the 1.333 NTSC aspect ratio. Aspect ratio refers to the ratio of width to height of an image. Since computer monitors use square pixels and NTSC monitors use rectangular pixels, your image will look stretched on output unless you render it at a 1.333 aspect ratio to compensate for the stretch. To see what I mean, render out a circle at an aspect ratio of 1 and dump it on a television monitor - it will look like an ellipse.

NTSC runs at approx. 30 fps, 60 fields per second. Each frame is made up of two fields, interleaved line by line: the first horizontal line comes from the first image, the second horizontal line from the second image, the third line from the first image, the fourth line from the second image, and so on. Although such frames look weird by themselves, when played back on NTSC they look fine because of the way a television updates its image. Fielding is used for several things, in several situations. If your animation looks 'stroby' - i.e. it strobes because your subject moves too far from one frame to the next and motion blur doesn't fix it - then use fields; it will greatly help smooth out your animation. You will also end up using fields, in particular a process called 3:2 pulldown, when you expand film to video: since you have to make 30 frames out of 24, it keeps some frames whole and builds the rest out of fields. To a trained eye, you can tell when 3:2 pulldown has been applied to footage.
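
A little sketch of the classic 3:2 pulldown cadence (the field-pairing pattern is the standard one; the code itself is just an illustration):

```python
# Classic 3:2 pulldown: 4 film frames (A, B, C, D) -> 10 video fields
# -> 5 video frames, i.e. 24 fps film becomes ~30 fps video.
def pulldown_32(film_frames):
    fields = []
    cadence = [3, 2, 3, 2]  # fields contributed by each film frame
    for frame, n in zip(film_frames, cadence * (len(film_frames) // 4 + 1)):
        fields.extend([frame] * n)
    # pair consecutive fields into video frames
    return [tuple(fields[i:i + 2]) for i in range(0, len(fields) - 1, 2)]

print(pulldown_32(list("ABCD")))
# [('A', 'A'), ('A', 'B'), ('B', 'C'), ('C', 'C'), ('D', 'D')]
# frames 2 and 3 mix two film frames - the telltale sign a trained eye spots
```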

Before you output, you should also run an illegal-color pass. This is pretty much a built-in filter in any editing/compositing software, even After Effects. Since television monitors have very poor color handling, bright colors tend to bleed on the screen.

The illegal color checker will clamp any color which is brighter than the recommended maximum luminance value.
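
In spirit it's just a per-pixel clamp - something like this, where 16-235 is the Rec. 601 broadcast-safe luma range (an illustration, not what any particular filter literally does):

```python
import numpy as np

def legalize(luma8):
    """Pull 8-bit luma back into the Rec. 601 broadcast-safe range.

    The text only mentions the bright end; real legalizer filters
    typically clamp both ends of the 16-235 range.
    """
    return np.clip(luma8, 16, 235)
```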

I'm not sure about the details of DVD or HDTV. I know HDTV had several different resolutions under debate, and I think one of them was something like 1280x720, but I'm not sure.



12. I.O.R. Info

Basically, when incoming light hits a surface, some of it can be reflected in a diffuse way, and some in a focused way - standard raytraced kind of reflections. This is a pretty gross simplification, but it's enough for now...

Problems with physical accuracy come from the fact that a lot of renderers, when they have a reflective surface, do this:

100% incoming light, 100% of that reflected as diffuse, 50% reflected as focused reflections.

Basically, they just add the reflections on, but this is wrong because it's made out of the same incoming light - there is only 100% light to start with. That's what all the physical inaccuracy stuff is about.

A good material shading calculation is smarter. It says "hey, I'm reflecting 50% of my light out as a sharp reflection, so only 50% of my light gets used to illuminate diffuse"... so you don't get unrealistically bright-looking surfaces where reflections are happening, and all those other nasty problems.

Fresnel reflection isn't tied very closely to your basic conservation of energy (^^^ that stuff. EDIT: actually, it is, as the next bit says. But while Fresnel reflection is derived from conservation of energy and such, you can do conservation of energy for your material reflections without using fresnel).

When light hits a surface, the amount that bounces off is based on a whole lot of funky physics... according to Andrew Glassner (in volume 2 of Principles of Digital Image Synthesis): "Fresnel's formulas may be derived by writing down Maxwell's equations at a surface boundary, and making sure that energy and continuity constraints are satisfied after reflection and refraction."

But the basic thing with Fresnel reflections is that how much things reflect is based on this stuff, and it varies with angle, usually stronger at glancing angles... and this funky physics is the way reflections behave in the real world, so it's important to use Fresnel reflection for things like glass materials to make them look funkily realistic.
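
As a concrete reference point, a common cheap stand-in for the full Fresnel formulas is Schlick's approximation - a standard graphics trick, not anything Brazil-specific:

```python
def schlick_reflectance(cos_theta, n1=1.0, n2=1.6):
    """Approximate Fresnel reflectance for unpolarized light.

    cos_theta is the cosine of the angle between the incoming ray and the
    surface normal; n1/n2 are the IORs of the two media (here air -> glass).
    """
    r0 = ((n1 - n2) / (n1 + n2)) ** 2        # reflectance at normal incidence
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

print(schlick_reflectance(1.0))   # head-on: ~0.053 - glass barely reflects
print(schlick_reflectance(0.1))   # glancing: ~0.61 - much stronger, as in life
```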

The good news is that it's pretty simple to use - in MAX you can just chuck a Falloff map, with the type set to Fresnel, onto a Raytrace material, set the map's IOR, set the material's IOR, make the diffuse black, make the transparency colour almost or exactly white, and you're in business. The Raytrace material is very cool because it has good physics - all that conservation-of-energy stuff happens automatically, and things just look right. Of course the Standard material doesn't do that, and since the Raytrace material is so much better in its shading equation, even for environment-mapped stuff it's often a good idea to use the Raytrace material with the raytracing checkbox off. It's only about 5 percent or so slower than the Standard material, and it's cool.

So in the real world and a good rendering system, you just do Fresnel in your reflections and everything works. But the MAX Scanline system has *bad physics*.

So often people put Fresnel in their diffuse to *manually* make the shading more realistic - they use their reflection intensity maps to darken their diffuse (not environment; reflection intensity - environment is what you see, reflection intensity is how strongly you see it. So some lake or something might be your environment, while fresnel is your reflection intensity)... so you don't get such bad light physics:

Light coming in: 100%

Light in reflection: 25%

Light the dopey material calculation stupidly thinks is coming out as diffuse: 100%

Dopey diffuse darkened according to reflection intensity: 75% (essentially, multiply by 1 - reflect amount. This is what you do if you put in a fresnel map with your diffuse in the black slot and black in the white slot).

So you now have 25%+75%=100% - good light physics instead of 25%+100%=125% - pulling energy out of nowhere. But it's annoying, slows your rendering down (extra maps and stuff), and should be done automatically (a la raytrace material).
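
As a tiny sketch of that bookkeeping (a plain illustration, not any renderer's actual shader code):

```python
def shade(incoming, reflect_amount):
    """Split incoming light between focused reflection and diffuse,
    so the two components never sum to more than what came in."""
    reflected = incoming * reflect_amount
    diffuse = incoming * (1.0 - reflect_amount)  # darken diffuse by 1 - reflect
    return reflected, diffuse

r, d = shade(1.0, 0.25)
print(r + d)  # 1.0 - energy conserved, not the 1.25 of the dopey calculation
```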

Right now, you need to do this kludgy material trick for realistic VRay renderings, because it's piggybacked on the standard material. The good news is that RSN there should be a VRay material that actually has good light physics, so you can leave your diffuse alone and still get decent physics...

"My question here, is; is the IOR not the same as the fresnel falloff for the reflection? (discounting transparent objects here) How do you set the fresnel falloff for diffuse and spec/refl to work right with the IOR value if they are to be used together? Also, how does the mix curve for the falloff play into this? Normally it is a linear value from black to white, but should this be modified to relate somehow to the type of surface I am trying to make? Am I really off on all this stuff?"

Well, basically, if you set the IOR of your falloff map(s) to the right value (if you're doing the crappy-physics workaround - where you should also use a fresnel to boost your specular at glancing angles if you want to use specular in your materials at all; often it's better not to use specular and put in a bit of geometry for your lights), you should get the right response - the IOR handles making the curve right, so you don't need to screw around with the output curves or anything...

Actually, you're pretty much right. Your ideas on how to use fresnel stuff to get better physics are right, but you should keep in mind that they are kludges to get around bad shading models, and ideally that stuff happens automatically. For VRay renders you have to stick with crappy kludges, because it doesn't yet have a proper material, but in MAX scanline use Raytrace materials with the raytrace checkbox turned off - it's not only good for raytracing.

Basically, IOR and Fresnel are closely related, but not the same. It all starts with weird physics stuff like the conductance of materials. That in turn determines things like the angles refractions happen at and the intensity of reflections. Strictly speaking, IOR is really just about refracted ray angles, but since it's closely tied to conductance and such, and thus to reflection amount, you can sensibly control your reflections using IOR. Even if you have a solid surface, IOR can be used, because it works in the calculations as a general measure of the material's physics...

Also note that generally for shiny metal and glass you should use total black in your diffuse slot, because their appearance comes from their reflections - even if you have blue glass or gold metal, there isn't really any diffuse, it just tints reflections/refractions that colour.



13. Motion Blur Trick

1. Render your animation with Brazil, then (just something I do to avoid cockups) make a copy of your scene.

2. Stick a matte/shadow material on your objects, apply the appropriate image moblur to the objects. Now switch to the scanline renderer and in render settings enable image moblur.

3. Grab that animation you rendered before and stick it in your environment slot - it should fit. Set it to screen projection.

4. Very carefully, push the render button.

Each frame will render, then do the moblur pass. What happens is that the matte/shadow object does its moblur thing, and because it shows the background through itself, the result is your background frame with moblur on it.



14. Photon Mapping Info

Well, using photon maps instead of the current system to accelerate GI would speed it up, but doesn't actually transform it to make it easier to do progressive display. While photon maps do shoot light around the scene, proper implementations of them use them only to accelerate your renders - you can't see light "splashing" around the scene.

Basically, the way Brazil computes GI is like throwing tennis balls at a target named "accuracy". Currently, it throws them in all directions, but if you add photon maps into the mix, it can chuck them more in the right general direction - though that's an analogy, not quite the algorithm. What really happens is that it sends samples where they matter more, based on your photon maps. Anyway. It still computes GI on a point-by-point basis.

What you *can* do is abstract your GI - Brazil already does this to some degree, taking GI sampling out of lockstep with image/material sampling so you can undersample etc. to get faster results. Extending that abstraction lets you render GI in a completely separate pass: you render your first pass, which computes reflections/refractions, standard lights etc. and works out how much GI contributes to each point and where the GI is coming from, then render a pass that computes the GI intensity. That way, you can see your standard lights first, then a sweep of skylight/bounce light coming into your pic.

Also possible: if you are taking, say, 60 GI samples per pixel, you could take the first 10, display the result of that, then take another 10, and another, until you hit 60. That way, your scene gets progressively less blotchy/grainy. That kind of system only works well if you have enough abstraction between your GI computations and your other surface/lighting stuff that recomputing the GI doesn't force all the other stuff to be redone. If you are working with a full image, and doing your sampling/antialiasing *properly*, you basically have to store every sample you take, which can be extremely storage expensive: 3x3 pixel sampling, which is kind of sucky, already requires 9 times the framebuffer space. (Though actually, if your GI max sampling is 0, you only need to add another channel with sample point info and some intensity tracking - and note that complex reflection/refraction scenarios can mean 8 points contribute to a pixel, each with different intensities and so on.)

You can do it on a bucket basis too, so you don't really need any more storage, and you could watch the GI on each bucket refining gradually - that would be most cool. But of course, it doesn't do much apart from making renderers look cool, adding some degree of interactivity (you can also render your GI adaptive sampling in successive passes, which is kind of useful), and slowing stuff down - any system like this adds a fair bit of complication and recomputation without adding any rendering optimizations, apart from more effective test renders (which *are* very cool).
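
The progressive idea in miniature - accumulate GI samples in batches and keep a running average (a generic Monte Carlo pattern, not Brazil's code):

```python
import random

def progressive_gi(sample_fn, total=60, batch=10):
    """Yield successively refined estimates as sample batches accumulate."""
    acc, n = 0.0, 0
    while n < total:
        for _ in range(batch):
            acc += sample_fn()   # one GI sample, e.g. a hemisphere ray
        n += batch
        yield acc / n            # running mean: less noisy with every batch

noisy_light = lambda: 0.5 + random.uniform(-0.4, 0.4)
for estimate in progressive_gi(noisy_light):
    print(round(estimate, 3))    # converges toward 0.5 as batches pile up
```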

21-08-2003 22:14
boss_3d
15. Making Grass

1. Draw 2 simple blades of grass using splines (all vertices are corners so that the poly count stays really low).

2. Collapse the blades to Editable Poly objects.

3. Give each blade a material with a different shade of green.

4. Then, for each object that is going to have grass growing on it, duplicate the blades that number of times.

5. Then make each blade of grass a Scatter object (Create > Compound Objects > Scatter) and choose the different 'distribution objects'.

Then you basically just play with the parameters until it looks nice!



16. Rendering Glass

The Brazil material is *very* realistic when used correctly.

If you are trying to simulate glass: make your diffuse color black; turn specular off (it looks disgusting and wrong on glass); put a falloff/fresnel map with an IOR of 1.6 in your reflect amount (NOTE: don't use the checkbox, use the map slot - the checkbox is buggy with bumpmaps etc.); set your object's IOR to 1.6; then check double-sided and push your transparency way up - to either pure white for high-grade glass, or very, very pale grey for a slightly light-dimming glass. When you have done all this, Brazil basically calculates everything that is going on in a totally realistic way. That is a bit of a generalization, because of the lack of spectral sampling techniques etc. (which aren't much good anyway) - but you can take a photo of, say, a wineglass, then build a CG wineglass and surround it with the original environment (windows etc. reconstructed), and it looks exactly the same - barring impurities in the glass's surface smoothness and so forth - with all the funky-looking glassy refractions of the original.

Oh, and for tinted glass, tint the transparent colour and the falloff map's white colour to the colour you want...

Most of the appearance of realistic glass, once you have the basic material set up, comes from what is reflected and refracted in it - the environment you put your objects in. So chuck in some well positioned lightcards and play with objects and so forth to make it look interesting.

Check http://www.neilblevins.com - it has many excellent tutorials on a variety of things, including glass.

Since it had been a while since I did any glass tests, I cooked up a few more - just chucked together a spline and lathed it. Looks kind of like a glass alien condom, or a genetically modified mushroom, but never mind. It has cheesy postprocessed specular bloom, done by boosting a reduced-intensity image, so it's quasi-H.D.R.I. blur, but not really.

I'm doing some more test renders with this glass thing - expect H.D.R.I. and maybe some tinted glass. It has been way too long since I did any of them, and it's so much *fun* to render cheesy glass pics.

Those "rings" you mentioned are essentially just features of the environment being refracted again and again and again under certain conditions - properly rendered glass has it too, and it gives a striping effect, and the sort of thing you're talking about is visible here.



17. H.D.R.I. Info I

Ok, well, first comes the explanation of H.D.R.I., blur, blowout, and how they all work together to create funky-looking stuff... because relatively speaking it's fairly simple.

In the real world, you get very bright lights. Very bright - like, e.g., the sun. If computer monitors could display that with true full intensity, you would be blinded by it. So instead they just display an image which has the same general kind of colour, but is clamped (limited) to the maximum intensity your display can show. This you no doubt know already, but it is important to have it in mind for the next bit.

Ok, now the bit about why people like blur filters. The reason blur filters are good for getting a hazier and/or more realistic-looking image (depending on the settings you use) is that in real life - because of minute imperfections in camera/eye lenses, atmospheric scattering, slight defocusing, scattering in the weird jelly stuff in eyeballs, etc. - light coming from a single source point actually gets projected slightly fuzzed around a point instead of exactly focused on it. Most of the light from some random thing *does* go in a straight clear line, which is why things don't look blurry, but some of it gets perturbed a bit off course.

Essentially the end result is like adding say 10% of the intensity of the image, but blurred out. That simulates the effects seen in both cameras and direct perception very nicely as far as the human visual system is concerned, and the spreading also helps eliminate hard edged cg/outer space looks.

Ok, so now let's combine them. Say we have an object that is super intense - say, the sun. We clamp it to the max intensity of a computer display (I'm using 1 as fully intense, 0 as black in these examples), then we blur it and add 10% of its intensity (note 10% is an ad hoc figure; the sort of effect that is realistic/artistically useful varies with circumstances). At the hard edges of the sun, we get a 10% intense "halo" from our blur effect, dropping off to zero.

But let's say the sun is being unrealistic and obtuse, and is only 5 times the max intensity of computer displays. Then, if we say "let's keep track of how intense things *really* are, instead of working with CG-land values which have little to do with reality", we start with an intensity of 5, blur that, and add 10%. Now, because we started with something 5 times brighter (because it wasn't clamped), our end result is actually 50%, not 10% - and *then* we clamp for computer display. So we have a core that is still pure white, but an edge that is *5 times brighter*. This is *very* important. Because we tracked our intensities properly all the way until our display forced us to clamp them, the end result is much more consistent and visually realistic.

Another very useful side effect is that such effects make the difference between something 5 times max intensity and something 2 times max intensity visible in their halos. It adds an extra perception of brightness to objects that are extra bright, which is lacking without it. Also, very, very bright objects have a corona that stretches at bright white almost to the edge of the blur zone and then has a relatively narrow transition - so you can even see the difference between 10x and 100x standard display intensity.
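
A numpy sketch of the two pipelines, on a toy 1-D "image" (the 10% bloom figure and the box blur are just stand-ins, as above):

```python
import numpy as np

def box_blur(img, radius=2):
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    return np.convolve(img, k, mode="same")

sun = np.zeros(32)
sun[14:18] = 5.0  # a hot spot 5x brighter than display white

# Low dynamic range: clamp first, then bloom -> faint 10% halo
ldr = np.clip(sun, 0, 1)
ldr_result = np.clip(ldr + 0.1 * box_blur(ldr), 0, 1)

# HDR: bloom at full intensity, clamp only at the very end -> 5x stronger halo
hdr_result = np.clip(sun + 0.1 * box_blur(sun), 0, 1)

print(ldr_result[12], hdr_result[12])  # 0.02 vs 0.1 - the HDR halo is 5x brighter
```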

Now, the standard MAX blur effect has a thing that selects pixels to blur based on luminosity. This is a stupid hack made necessary by the lack of high dynamic range. Essentially it says: "well, I don't have the faintest idea how bright these things really are, but because I want them to look all nice and bright and stuff, I'll act like they're a whole lot brighter than everything else".

The fact is, all the processes like atmospheric scattering etc. that make blurring happen don't care whether something is brighter than a certain threshold (this sort of thing can happen in extreme cases because of quantum effects and weird situations, but in general it is not true) - an image gives a result identical to the same image 10 times brighter, then divided by ten afterwards to be the same sort of intensity.

So:

image, chuck through reality, end result is the same as:

image x10, chuck through reality, divide by 10, end result.

But that principle is violated by saying "I will behave differently above value x". The reason white type stuff is extra strong is because it's extra bright, not because it has extra blurring properties.
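
That scale-invariance is just the linearity of these processes, and with any linear blur you can check it directly (a toy demonstration):

```python
import numpy as np

img = np.random.rand(64)
k = np.ones(5) / 5  # any linear filter kernel will do

a = np.convolve(10 * img, k, mode="same") / 10  # brighten, blur, dim
b = np.convolve(img, k, mode="same")            # just blur

print(np.allclose(a, b))  # True - a luminosity threshold would break this
```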

Ok, so that's the rationale, implementation and implications of HDR and non-HDR blur. Otherwise known as the "non-HDR blur, like, um, totally, uhh, sucks, dude" rant. Oh god, I said that would be relatively simple. I am hoping like hell I was wrong.

Now, how blurring interacts with antialiasing has little to do with H.D.R.I., apart from those cases where things are unclamped etc. The reason why doing blurs with subpixel access is not very useful is actually fairly simple - I just explained it in a piece of writing notable only for its turgidness, indirectness and general incomprehensibility. And Pi reminiscence, apparently. I haven't actually seen it - I should sometime, I guess. Though I vaguely know what it's about.

Anyway, here is the non "anal-retentive terminology freak" rendition:

The whole point of a blur is that it kills all the fast changes in your picture. Thus, if you supersample your blur process, all that happens is you produce more of those fast changes, which are then subsequently totally chucked out. Therefore, you don't get any extra quality from adding these fast changes, because they get killed, and contribute essentially nothing to the end result.

Now, I will qualify that with the statement that it is an outrageous simplification, says a few things that are practically lies from a technical viewpoint, has some significant exceptions, and in general has only a moderate correspondence with reality.

But hopefully it will communicate the general idea.

For example, if you feed a blurring procedure an unantialiased image, then any big problems with that image, like severe interference patterns, will show up in your blur. So you've got to feed it a nice, well-filtered image. Well, actually, mostly the blur component is practically exactly the same even if you feed it an aliased image - it's just that when you have nasty aliasing artifacts, they'll be blurred too. And high-contrast scenarios (very high, like 1000x full pixel intensity) are where antialiasing is most important - without it, you may well get an end result that is quite nasty. But if you antialias, you would probably get essentially the same result as with a subsampled accurate blur, even in such extreme situations - sorry to contradict my earlier post, but I hadn't thought about it that much at that stage.

Basically, unless you do fucked-up things to your image, like million-to-one contrasts that you never use except in tests, the lack or presence of subpixel blurs will *never* noticeably affect the end result, either visually or mathematically, as long as you have good filtering.

So don't lose any sleep over it - your artwork will always have brilliant quality blurs without freaky supersampling, as long as the blur effect itself isn't written braindeadly.

Oh, and just to be a hopeless nitpick: while "reconstruction filter" is a term often used for antialiasing/sampling filters, and the step in your sampling pipeline where you apply a filter to your sample data is called "reconstruction", usually "filtering" is the term used; "reconstruct" is not used much as a synonym for "filter" - so I would say "it makes no odds whether you blur it, then filter it, or filter it, then blur it". Note that while that part of your statement was true, the reason it works nicely without artifacting is that the filter step throws nothing away that really affects the blur much, and the very gradual nature of the blur means it doesn't do anything that really affects the filtering much - since filters are generally a local weighting operation, and your blur is much the same in any one region, you don't get those filter response alterations. HDR is just the icing on the cake that makes sure extra-bright pixels etc. have nice-looking blur, as explained above, and doesn't much affect how filtering and blur interact...

Also, I should explain that while reconstruction filters, antialiasing filters, and image filters refer to much the same thing, and reconstruction, antialiasing, and filtering refer to much the same thing, there are some subtle distinctions in what they mean and where they are applied... sorry I was a little unclear on this before

Antialiasing, technically, is the process of killing nasty artifacts that arise if you take single discrete samples of continuous things that interfere with your sample grid structure, e.g. nasty jagged edges, moire patterns (basically what happens if you sample something with high frequencies on a square grid), etc. This antialiasing is usually done by filtering supersampled data.

Reconstruction, technically, is the process of taking a bunch of samples and working out a, well, call it an "interpretation" of those samples. Reconstruction uses a reconstruction filter, which says how much different places on the input contribute to the "interpretation". Some filters, for example, have edge enhancement stuff - essentially they take an input signal, then "interpret" it in such a way that it has stronger local contrasts. Note that "interpret" is a kind of analogy for how reconstruction filters alter the original signal to produce a reconstructed signal (called reconstruction because generally the point is to take an input set of discrete samples and infer some kind of continuous signal from them), and is not a technical term... so if you start talking about a reconstruction filter's interpretations, probably no one will know what you are talking about.

Filtering is a general kind of term for running a bunch of samples through a filter so that they are shifted in some way considered desirable. Filtering is really very similar to reconstruction - they use the same *process* - but filtering is more centered around making something that has nice response properties (lack of edge enhancement, edge enhancement, frequency control, etc.) than around reconstructing continuous signals. So really, filtering and reconstruction use the same process but have different aims. Usually CG software combines filtering and reconstruction, but they can be done separately. In a CG sense, filtering-type goals are generally the more important role of the filter, rather than purely attempting to reconstruct stuff (which is what is going on when, e.g., you scale an image up and it looks all blurry when it's bigger - the way each original dot is interpolated is defined by a reconstruction filter, though usually primitive ones are used), so it's mostly called filtering. But filtering and reconstruction are, as I said, essentially the same, with subtle differences - filtering generally being the broader term.

So what happens in something like Brazil is that, in order to antialias, supersampling and filtering techniques are applied. The filtering serves to reconstruct a final image, which has varyingly nice properties depending on the filter used (or really crappy ones if you use something brain-damaged like a box filter).

21-08-2003 22:15
boss_3d
18. H.D.R.I. Info II

Well, when you scale your image intensity, you do create an end-result image with high dynamic range. But there are 2 problems with this:

1) If you take a standard normal image and scale it by 2, say, you don't get any extra stuff out of it - it's the same damn image data, and if it is a picture of something that got bleached out in the image-taking process, the detail there won't reappear (whereas in a properly created HDR image it will be there).

2) If I have an image scaled to 10 times the original brightness, then instead of things going up in intensity steps of 1 unit, they go up in steps of 10 units. Even on a standard display, stepping artifacts are often visible in spread-out gradients. Multiply that by 10 and you essentially lose a lot of fine precision in intensity, and tend to get stair-stepping, or noise, or both - it doesn't look good. So one of the other things H.D.R.I. stores is not just more range, but more accuracy.

So basically you have to take your original picture with high dynamic range, or you lose stuff that you need and/or get lots of problems with crummy-looking images because of poor intensity precision - those are the main reasons why H.D.R.I. from scaled standard images can be problematic. Having said that, it can look very good, particularly if you prepare an image that is underexposed and contains all the range detail you need, and you are not scaling it too much (no more than maybe 3 or 4 times)... otherwise, true H.D.R.I. is best.

Also, by playing around with output curves it is possible to create an image where the standard tones are much the same, but bright patches (like a skylight, say) that are bleached out get extra intensity...

When you scale down intensity, you don't have a high dynamic range image any more. When you clamp intensity, you don't have a high dynamic range image any more either... but there is an important distinction. Scaling takes an image where intensity varies from, say, 0 to 10 and divides all your intensities by 10. Clamping takes an image from 0 to 10 and makes everything more intense than 1 equal to 1 - so the first makes everything dimmer but preserves bright details, and the second leaves most stuff at the same intensity but forces extra-bright stuff to your maximum intensity... which is usually preferable for computer display. The real power of H.D.R.I. is for doing things like reflections and DOF properly. For example, if you have a sphere dimly reflecting its surroundings, then a render of that sphere with a standard-dynamic-range environment containing some bright lights will show everything pretty dimly reflected; with an H.D.R.I. environment the moderately bright parts are still very subtly reflected as before, but the bright lights will look right. And if you look at reflective stuff around the place, most of it dims what it reflects quite a lot, so using high original intensities where they belong makes things a lot more realistic, even if the end result is clamped...
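
The distinction in two lines of numpy (toy values, with display white at 1.0):

```python
import numpy as np

img = np.array([0.2, 0.8, 1.0, 4.0, 10.0])  # HDR intensities

print(img / 10.0)            # scaled:  [0.02 0.08 0.1  0.4  1. ] - all dimmer
print(np.minimum(img, 1.0))  # clamped: [0.2  0.8  1.   1.   1. ] - brights flattened
```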



19. Speed Tips II

1) Group all the objects except the plane

2) Set contrast in GI to white (NOTE: This basically says "ignore any fluctuations in intensity at all". What that means is that your GI min rate will always be used, never the max one, so it's the same as saying -3, -3, if you use, say, -3, 0. So it's damn fast, but blurry. Often not so good. The default setting is better at balancing clarity and speed, usually only increasing sampling rate when necessary for a good pic - so certainly, play around with it, and use whatever is fast but looks good, but the default is pretty good, and extremes like a white colour will tend to screw it up.)

3) Set acceleration to Manual Hybrid

4) GI sample rate down to 25 or 20 (60 is probably useless) (NOTE: That will tend to give you minimally blotchy outdoor scenes, but flickering in animations. And it's pretty bad for indoor renders. Rule of thumb is, use what works and looks good, but as low as possible, and individual scenes vary a lot - interiors may need over 60 to look good, subtle skylight illumination in a still may need 15, the same in an animation may need 24.)

5) For reflection try self-illuminated planes (see Neil Blevins site)

6) GI -1/0 max is more than enough

7) AA 0/3 could be OK, but try 1/2 as an alternative.

I normally use this setup, the grain is quite OK, and my render times for 0.5-million-poly scenes are within 30-40 minutes. I've noticed that grouping, acceleration and contrast heavily affect render time. Hope this helps you a little.

One of the things that is really hurting your render times is a lack of undersampling - and that won't be much helped by making your min GI -2 (which is what I suggest - try it, it will make things faster)... because Brazil doesn't yet undersample reflected GI etc. - hence pain. Since GI isn't yet undersampled, heavy AA settings will make things a hell of a lot slower: sampling with 0, 3 and 60 GI samples means that in edge-type situations you will get, say, 3840 GI samples thrown out. This hurts mucho, and is way overkill - so try doing it with no antialiasing, but larger, then resize in something that actually does it with proper filtering, like ImageMagick (a command-line image tool, free - clunky to use but awesome-quality resizing with your choice of about 20 filters). Alternatively, if you use a constant sampling rate, your GI sampling amount is *way* less inconsistent, and because your GI gets supersampled, you can turn the sample rate down and still get high quality - so try this:

I want constant sampling at 3x3 samples per pixel (pretty high quality without taking too long). This gives me 9 samples for every pixel. Per pixel, I want 60 GI samples, so I do a little division: 60/9 = about 7 GI samples per image sample. Now I set my GI rate to 7, go to my render settings and set my AA to 2,2, then hit the checkbox that locks the GI sampling rate to the image rate (this means that diffuse stuff, which is *not* locked to image sampling by default, doesn't look grainy). Now, voila! Good-looking, relatively fast GI. If it looks a little grainy (which it may well do), try bumping your GI rate up to maybe 12 or something - but go up in small steps. Your no. 1 time killer has got to be that heavy oversampling on the GI. Also, evaluating materials a lot can be slow, so using an AA of 1,1, which is quite good but fairly fast, with the GI at equivalent rates can be good too.
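
That division, wrapped up as a trivial helper (just restating the arithmetic above, nothing Brazil-specific):

```python
import math

def gi_rate(target_gi_per_pixel, samples_per_pixel):
    """GI samples to request per image sample, given a constant
    image sampling density (e.g. 3x3 = 9 samples per pixel)."""
    return math.ceil(target_gi_per_pixel / samples_per_pixel)

print(gi_rate(60, 3 * 3))  # -> 7, the figure used in the text
```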

Large amounts of bounces are always a recipe for renderpain too - maybe it will look fine with just 1.

Oh, and make sure you use a CSG plane for your ground, not a polygon one - they are a whole lot faster to calculate than poly planes. Oh damn, it's reflective, and CSG planes don't do custom materials yet. Bang goes that bright idea. But it will help a lot if you make your ground plane as small as possible and group it separately from your other stuff.

Also, as I think the Quasi-Monte-Carlo sampler can't handle multiple image samples effectively at this stage, you may find you need a few extra GI samples for the same GI quality when you boost your sample density - I would recommend something like 2x2 sampling (min 1, max 1) and a sample rate of 30-40, for 120-160 GI samples per pixel (you need that kind of power because otherwise the redundancy you get from pixel supersampling in your GI means your noise is a little slower to reduce).

Also, with a shiny floor and car you will probably find a lot of inter-reflection going on, and every time you inter-reflect, your GI costs go up. Since your floor is not that shiny, extra reflections probably do you little good. So try turning your reflections down to one or two (this is *bounces*, which should result in a total of 2 or 3 actual reflections - if a reflection bounces off 2 things on the way to its destination, it appears on thing one, thing two, and the destination). 1 bounce (2 iterations of reflection, so you can see the lightcards on the car reflected in the floor, and the lightcards on the floor reflected in the car) will probably look great... and render much faster.

One thing you should look out for: Brazil's GI sampling currently can alias quite badly at low sampling rates, so you may be best off going for something like no AA and higher GI. Essentially, the settings here make Brazil work a lot like the old Ghost did, which is better for heavy reflection stuff, because you mostly lose undersampling anyway, and it gives you more control - you no longer have inconsistent GI sampling rates, which can make some noise go away fast but take years to render, while other stuff is noisy as hell but renders fast. Often, getting rid of the last small amount of grain is pretty hard, so you may want to either 1) think of it as funky automatic film grain, or 2) get it pretty low, then use a good noise-removal filter (one that'll kill the noise without losing detail).



20. Photon Mapping, H.D.R.I., etc.

Q: "What is photon mapping?"
A: Photon mapping is a technique for accelerating Global Illumination (GI). Basically, it shoots a bunch of bits of light out of your light sources, and uses that to get a general idea of where light is going in your scene, and then the renderer can use that data to calculate its GI faster and more cleanly, because the photon map essentially gives it a rough guide as to where it should be looking instead of it having to look as hard all over the place, which is inefficient.

Q: "Is it the same as H.D.R.I. rendering?"
A: No. Photon mapping accelerates GI, H.D.R.I. keeps track of intensity properly (pixels that are brighter than your display can show, and smoother detail in intensity). While you can (and should) keep track of intensity properly in your GI, they are no more related than specular highlights and volumetrics, say.

Q: "Does it work with the public release version?"
A: Depends which one you mean. H.D.R.I. is totally in the public version, possibly minus H.D.R.I. file saving (I haven't actually needed to try this yet, so I don't know if it saves or not). But anyway, Brazil, the public version, does H.D.R.I. Photon maps are, as of right now, purely a part of closed testing, but soon there should be a 0.3.x public alpha release with lots of goodies, including those photon maps.

Q: "How do you use it?"
A: If this refers to photon maps, the current answer is: You don't. You can't. Public Brazil doesn't do this yet. When Brazil gets it, you will need to learn how to use the photon mapping settings. Photon mapping is a fairly general algorithm, and the specifics of how it is implemented and how to tweak it can vary a great deal between implementations. If you are referring to H.D.R.I., all you have to do is use the H.D.R.I. image loader, or set intensities greater than plain RGB white via RGB intensity multipliers or high specular cranking, and Brazil will do the H.D.R.I. in GI, DOF, reflections etc. automatically.



21. Thin glass approximation

Since a thin glass pane refracts twice, the second refraction cancelling the first and returning rays to their original direction - which is the situation if you create a thin box and put a glass material on it - you can get a very close approximation by using a plane instead of a box, with a material that doesn't bend incoming light rays at all but only adds the fresnel reflection component.

All you need to do is take a Brazil test material, set diffuse to black, set specular to 0, turn 2-sided on, put a falloff map in your reflect slot and set it to fresnel with an IOR of 1.6 or thereabouts, then set your global IOR (the one in the material, not the map) to 1.0, which basically says not to alter ray angles. Then dump that onto a plane for your window glass, render, and voila - you get fast thin glass. It doesn't really have any thickness, but it looks almost exactly like thin glass in most cases (such as in the windows of a building).
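
Snell's law shows both claims - why a parallel-sided pane leaves ray directions unchanged, and why a material IOR of 1.0 means no bending at all (plain physics, nothing Brazil-specific):

```python
import math

def refract_angle(theta_in, n1, n2):
    """Snell's law: n1*sin(theta_in) = n2*sin(theta_out), angles in degrees."""
    return math.degrees(math.asin(n1 / n2 * math.sin(math.radians(theta_in))))

inside = refract_angle(30.0, 1.0, 1.6)  # entering the pane: bends to ~18.2 deg
print(refract_angle(inside, 1.6, 1.0))  # exiting: back to 30.0 - the bends cancel

print(refract_angle(30.0, 1.0, 1.0))    # IOR ratio 1.0: stays 30.0 - no bending
```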

Actually, you could get an *incredibly* close match by using some complicated map mixing to account for the fact that you get more than one fresnel reflection when light goes through a pane of glass (there are two surface boundaries), but really, the additional reflection bounces are a lot less intense than the main one, make the calculations more complicated, and don't really alter the subjective appearance much - so why bother?

21-08-2003 22:15
boss_3d
to be continued...
