
fix for FS324, regression on triangle #358

Open · wants to merge 1 commit into base: master

Conversation

@LeForgeron (Contributor)

In mesh, mesh2, or standalone use, the 3.6 behavior was fine and later got broken.

As reported in #121 (scenes are from that issue)

Before (i.e. in 3.7):
a37
b37

After:
a
b

@c-lipka (Contributor) commented May 27, 2018

Some questions/observations that make me doubt whether the change was made based on the whole picture, rather than just patching up symptoms:

  • Does it make sense to set MIN_ISECT_DEPTH to zero rather than just throwing it away altogether?
  • With these changes, does the subsurface scattering special handling still make sense?
  • How does this tie in with Bill's current work on the polynomial solvers, which affect precision?
  • What is MIN_ISECT_DEPTH even supposed to be doing, and why should it be safe to just set it to zero?

As long as I'm not sure whether the questions have been thoroughly examined in developing the changes, I'm reluctant to pull them.

@LeForgeron (Contributor, Author)

  • 0.0, but the check is now > rather than >=, and the test must remain to eliminate negative distances
  • subsurface is another subject; you are the one most informed about it
  • do you have a testing scene?
  • ask about the change from SMALL_TOLERANCE between 3.6 and 3.7

@c-lipka (Contributor) commented May 27, 2018

  • 0.0, but the check is now > rather than >=, and the test must remain to eliminate negative distances

While that sounds plausible at a superficial glance, looking at the v3.6.2 code I do not see any such test. So what makes you think it is necessary in v3.7? Is that just a judgement from symptoms? If not, please elaborate.

  • subsurface is another subject; you are the one most informed about it

Yes, but if you touch code that relates to it, you should gather information about it yourself. Like, say, asking me what that test is all about anyway.

  • do you have a testing scene?

For what? Bill's work on polynomial solvers? Ask Bill - if it even makes sense to ask for a testing scene there. For subsurface light transport? Try scenes/subsurface/subsurface.pov for starters.

  • ask about the change from SMALL_TOLERANCE between 3.6 and 3.7

That's not really an answer, is it.
Also, I'm not really interested in the answers per se. I'm interested in getting a glimpse as to how you found those answers, so that I'm confident you know not only if the change addresses the issues without breaking other stuff, but also why. Without the latter, there's no way of telling whether the former is just an illusion due to the choice of test cases.

@wfpokorny (Contributor) commented May 27, 2018

(I'm a little rushed in writing this, please forgive mistakes)

The work I'm doing today is loosely tangled with the proposed changes here with respect to the minimum returned intersection depth for all shapes / surfaces. I'm primarily aiming for more accurate roots at this point. What happens with those later is, well, later.

I am a little concerned about taking the intersection-to-intersection depth stack test down to >0 for reasons of media / interiors - not due to how things are today, but how things perhaps should be. Ahead of my diving into solver and root issues, I was looking at media and media artefacts.

There are maybe 6-7 causes of media artefacts (they are often really interior issues). A couple apply to this proposed change.

One is the bright media artefact: we pick up the first surface going into a shape, but the second surface for the ray is < MIN_ISECT_DEPTH (or sometimes < SMALL_TOLERANCE, AKA SHADOW_TOLERANCE) and, due to the intersection depth stack, we skip it. This leads to an interior / media interval that is far too large, because the media interval stops on whatever surface the ray hits that is far enough away from the initial one to be added to the depth stack - and sometimes this surface is quite far away. The 0 depth-stack value might help here, in that we would 'return' roots for the second surface. Yes, this is an argument for taking the intersection depth in all shapes / surfaces down to something smaller. A sketch of the filter in question follows.
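A minimal sketch of the kind of depth filter under discussion - hypothetical names, not the actual trace.cpp code:

    // Hypothetical sketch of the depth-stack filter being discussed. With a
    // lower bound like MIN_ISECT_DEPTH (1e-4), a legitimate second surface
    // closer than the bound is dropped and the media interval runs on to a
    // much more distant surface.
    struct Intersection { double depth; /* ... */ };

    bool KeepIntersection(const Intersection& isect, double minDepth)
    {
        // With minDepth == 0.0 and a strict ">", negative depths are still
        // rejected while arbitrarily small positive ones are kept - the
        // behavior this pull proposes.
        return isect.depth > minDepth;
    }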

The second issue is dark media artefacts / speckles. Here, one of the things happening is that when a ray is initialized there are two places - one in each of the two bounding mechanisms - where we do an inside test before associating an object's interior with the ray. For example, in tracepixel.cpp,
...TracePixel::InitRayContainerStateTree(Ray& ray, BBOX_TREE *node) has this code:

    if(node->Entries == 0)
    {
        /* This is a leaf so test contained object. */
        ObjectPtr object = ObjectPtr(node->Node);
        if((object->interior != NULL) && object->Inside(ray.Origin, threadData))
            containingInteriors.push_back(object->interior.get());
    }

The problem in this case is numerical: it happens when the coordinates of the ray origin (often from a previous intersection), which are in theory on the surface, do not test inside as true. I have somewhere a test case where this happens with a simple sphere containing media. In these cases the problem is that we don't associate the interior of the object with the ray when initializing the ray. When that happens, everything interior-related that follows is corrupted.

Get to the point, eh... Well, the 'fix' I have in my head is, instead of doing the inside test at the ray origin during initialization, to do it at ray.Origin + (MIN_ISECT_DIST? or SMALL_TOLERANCE?) + epsilon along the ray. I 'think' this will basically prevent bad ray-interior associations during ray initialization in a way which fixes both the bright and dark types of depth-stack media/interior artefacts, with the benefit that we don't do interior / media work on extremely short intervals.
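An untested sketch of that idea, adapting the snippet above (the nudge distance is the open question):

    // Hypothetical: inside-test a point nudged forward along the ray rather
    // than the exact origin, so a point numerically 'on' a surface resolves
    // consistently. SMALL_TOLERANCE vs MIN_ISECT_DIST (plus EPSILON) is the
    // undecided nudge distance.
    Vector3d testPoint = ray.Origin + ray.Direction * (SMALL_TOLERANCE + EPSILON);
    if((object->interior != NULL) && object->Inside(testPoint, threadData))
        containingInteriors.push_back(object->interior.get());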

So basically - I'm worried about taking MIN_ISECT all the way to 0.0 because it might defeat a fix I haven't coded and tested! :-) I say all this not completely understanding how MIN_ISECT vs SMALL_TOLERANCE is being used with respect to the depth stack and bounding. Also, of late, I believe we should be working toward smaller depth / shape-depth values in general for other good reasons - like the subsurface feature.

I too am fuzzy as to how we ended up with MIN_ISECT_DEPTH. Anyone know why it was created?

@c-lipka (Contributor) commented May 27, 2018

The problem in this case is numerical: it happens when the coordinates of the ray origin (often from a previous intersection), which are in theory on the surface, do not test inside as true.

Are you saying we're doing an inside test for each and every secondary ray? Shouldn't we be aware already what interiors we're currently inside?

@wfpokorny (Contributor) commented May 27, 2018

Are you saying we're doing an inside test for each and every secondary ray?

Good question.

....Not sure if the test is always done. It isn't if the associated object has no interior. Opaqueness must be accounted for too. Anyway, I don't know. It's now at least a couple months since I ran down this particular media artefact; I ended up going after solver-related ones first as probably more important overall.

We'd only need to do the test, I think, on ray origins entering - or perhaps, depending on how nested interiors / transparent shapes are originally handled for the original (not continued) ray, leaving - a new shape surface with an interior.

Rambling... At a high level, I wondered on seeing the code why the inside test is there at all. If the object has an interior defined, perhaps that would be enough, if the parser and setup are doing their job. Perhaps the test was added at some point as a sanity check that the surface is of a type with an actual inside? Given we have it, I thought it might be just the mechanism - with some adjustment of the location tested along the ray - to eliminate a class of interior artefacts.

Shouldn't we be aware already what interiors we're currently inside?

In the simplest case, yes. With nested interiors, as I've said elsewhere, I'm not sure what happens during initialization. On exit of a shape with an interior we should drop the last interior added, which as an operation would not need an inside test itself. If in the current interior (or none) and entering another shape with an interior, we'd need to push the new interior onto the ray's interior 'stack', so some inside test would be reasonable there.

Aside (due to my scattered thinking): Why does the disc have a defined interior? It's documented as having one, like a plane, and implemented that way, but I'm thinking it shouldn't have one at all. Wondering a little about the person doing the foliage testing with discs and his 'different than he expected' result.

Edit:

I dug up the test scene I mentioned. It isn't the original scene, but something contrived to exaggerate the interior-to-ray association problem found due to intersection / inside-test numerical differences. It uses an orthographic camera with the image plane intersecting a sphere with dense media only at the center. I hacked a quick fix at some point in the two places in the default bounding method, using the ray direction to create ±EPSILON locations off the ray origin, tested all three against object->Inside, and made the association if any of the three test locations were inside. The hack cleaned up the attached test case.
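A rough reconstruction of that hack - hypothetical code, written from the description above:

    // Test the origin plus points nudged +/- EPSILON along the ray direction;
    // associate the interior if any of the three locations tests inside.
    Vector3d o   = ray.Origin;
    Vector3d fwd = o + ray.Direction * EPSILON;
    Vector3d bwd = o - ray.Direction * EPSILON;
    if((object->interior != NULL) &&
       (object->Inside(o, threadData) ||
        object->Inside(fwd, threadData) ||
        object->Inside(bwd, threadData)))
        containingInteriors.push_back(object->interior.get());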

rayInteriorAssociationIssue.pov.txt

@wfpokorny (Contributor)

Couple random thoughts as I was drinking my first cup of coffee.

  • The interior / media speckles patch I have in my head doesn't need to be tied to any existing SMALL_TOLERANCE / MIN_ISECT_DEPTH value, other than needing to be EPSILON larger. Thinking my worry of a conflict with the proposed changes here is unfounded. We could just decide not to associate an interior with a ray origin if the interior doesn't extend some "distance" in the ray direction.

  • On why the inside test exists at all today in the ray initialization: I had the thought it might help performance with media attenuation / shadow rays that miss a shape's backside due to SMALL_TOLERANCE / MIN_ISECT_DIST, or with blobs given the existing 1e-2 internal depth on secondary rays. I suppose it would prevent an interior association in such cases - a pretty thin argument though. Tempted to try a compile and some renders without that object->Inside test, but I'm already drowning in solver stuff at the moment. Maybe when I get back to interior-related media speckles.

@c-lipka (Contributor) commented May 28, 2018

Having just had a look at invocations of Inside, I see no evidence of insideness-tests when initializing secondary rays. Initialization of primary rays (both camera and photons) does require insideness-tests to determine which interiors are active on the ray's first leg, but all subsequent legs just "add" to or "subtract" from the list of active interiors based on intersection information.

This is what I expected; it wouldn't really make any sense to do insideness tests at intersections, because for the object just being entered or exited the insideness test would be inconclusive even at infinite precision.

@wfpokorny (Contributor)

Having just had a look at invocations of Inside, I see no evidence of insideness-tests when initializing secondary rays. Initialization of primary rays (both camera and photons) does require insideness-tests to determine which interiors are active on the ray's first leg, but all subsequent legs just "add" to or "subtract" from the list of active interiors based on intersection information.

This is what I expected; it wouldn't really make any sense to do insideness tests at intersections, because for the object just being entered or exited the insideness test would be inconclusive even at infinite precision.

Busy with real life for a day or two - thanks for digging. Might mean dropping that inside test is the better thing to do, if doing something more complicated doesn't generally help the interior speckling. IIRC the dark-pixel case that got me digging was not obviously a camera or photon ray; tracing back, back and back led into that ray initialization code. The test-case scene using the camera plane was an easy way to exaggerate the problem. Entirely possible I fooled myself - I do it often enough. :-) Bummer though, because it means what I found is a fringe case for all the dark media speckles we see and not the main cause...

@wfpokorny (Contributor)

Though I'm buried elsewhere at the moment, I'll try to run the suggested changes here at some point soon-ish as part of my many hundreds of scenes for root testing. See what changes or breaks, if anything...

As part of this I will need to decouple my recently coupled MIN_ISECT_DEPTH blob.cpp intersection-depth update... My recent thinking wasn't to take MIN_ISECT_DEPTH to 0.0 as, @LeForgeron, you are trying here. Rather, to take MIN_ISECT_DEPTH (which would be used in all shapes / surfaces as the returned minimum intersection depth too) down to maybe 1e-8, given EPSILON is today at (a probably too large) 1e-10, then SMALL_TOLERANCE (AKA SHADOW_TOLERANCE) down to maybe 1e-6. All part of better centering and broadening the practical numerical space about 0.0, over the asymmetric (1e-3 to 1e7) situation we have today.

@wfpokorny (Contributor)

FYI. I expect I'll be pulling this branch into a local build for testing against my solver test scenes next week. Re-basing all my branches to the current master to set up for it took longer than I expected, and I'm out of free time this week.

Aside: I searched all issues on the old flyspray bug system for small_tolerance and min_isect_dist, hoping to turn up a test case or discussion related to the creation of MIN_ISECT_DIST, but found only this #121 and another issue ported here in #125, which should be looked at alongside the aim of this pull.

wfpokorny added a commit to wfpokorny/povray that referenced this pull request Jun 20, 2018
Near term need to use a value not MIN_ISECT_DEPTH in blob.cpp to test
Jérôme's github pull request POV-Ray#358. Longer term aim is to drive all returned
intersection depths from shape code to MIN_ISECT_DEPTH_RETURNED. The value
is automatically derived from DBL setting and at double resolves to about
4.44089e-08. On the order of the square root of POV_DBL_EPSILON.

Moving blob.cpp's inside test value INSIDE_TOLERANCE to POV_DBL_EPSILON over
previous recent re-calculation. Over time plan is to move EPSILONs best
nearer a double's step to POV_DBL_EPSILON.

Cleaning up doxygen documentation added during recent solver related
updates.
@wfpokorny (Contributor)

An update after some initial digging and testing.

I'll start by saying that, on going back and reading through issues #121 and #125, Jérôme had already determined MIN_ISECT_DIST was created to fix an issue with the glass in the balcony.pov scene - an issue created early in 3.7 development when we started to do a >= SMALL_TOLERANCE test not done in v3.6 or prior. Unfortunately, this period falls in a gap for which I have neither change logs nor commit logs, and I don't have beta releases prior to beta-33 for v3.7. I'm still unsure why it was initially added; it certainly causes problems.

Having run about 2/3rds of my solver test cases, things mostly work using the pull request straight up, merged into my fixed-solvers branch. There were expected changes and a few surprises - including a set of blob issues into which I need to dig some more. The #125 issue is fixed, in that POV-Ray again matches the more accurate v3.6 result with this update.

As somewhat expected, some scenes with meshes showed subtle differences. The balcony scene (CSG with cylinders, glass, liquid) further improved with respect to matching v3.6. The MIN_ISECT_DIST at 1e-4, versus SMALL_TOLERANCE at 1e-3, made for a closer match to v3.6 - but only to first order, as Jérôme said too.

In a surprise, self-shadowing appeared in some of my sphere_sweep test cases, which I was able to fix with a larger tolerance value. For a long time I - and others - have found the sphere_sweep tolerance control of not much use. The reason appears to be that this new-to-v3.7 'depth stack closest' test was overriding any tolerance smaller than 1e-4 (excepting SSLT rays).

Another surprise is axis-aligned fallout with even very basic non-sturm blobs. As discussed elsewhere, I'd pushed the blob.cpp intersection depth down to about 4e-8, and this was working well until I merged in this pull request. It might be that I need to make solve_cubic and solve_quartic more accurate, as I did with solve_quadratic, but that's a guess based on the fact that the blob sturm results are all clean - I need to dig some more.

I also plan a custom build or two where I add throws on seeing <0 or <=0 values, just to see whether they occur in any of the scenes I have. v3.6 didn't test for >0, so perhaps we need not either. A sketch of such a trap follows.
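One way such a throw trap might look (hypothetical, debug-build-only code):

    // Abort loudly the moment any shape hands back a non-positive depth.
    if (isect.depth <= 0.0)
        throw POV_EXCEPTION_STRING("non-positive intersection depth returned");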

Oh, I had one CSG case involving a height_field show a slight difference too, but that is probably mesh-driven.

I'll further update as I figure things out.

@wfpokorny (Contributor)

Update on the blob (solve_quartic) issues seen with my solver updates plus the changes in this pull request.

Found that the solve_quartic() tolerance on the roots it finds is somewhat large - in the global space amounting to as much as ±0.1 for unit-ish blobs scaled up to near the maximum size (1e7). The core issue is that the tolerance is loose enough that, for certain near-axis-aligned ray-surface equations, the root values found wander from <0 to >0. The inaccurate >0 roots get used as the closest root - unless filtered.

Yes, I now think this is the reason for that large, other-issue-causing blob.cpp minimum intersection depth of 1e-2 that's been sitting in our code for so long. Even 1e-2 is not big enough, as can be seen in the master branch with a 10000x scene scale-up of the Gail Shaw blob media test case that kicked off my digging into the solvers.

After a couple of failed attempts at updating solve_quartic itself, the solution I settled on - and which I'm still testing - is to polish the solve_quartic roots with the Newton-Raphson method before returning them (non-sturm version of the lemon performance benchmark scene: +6.2%). With this change all my previous issues are clean. Now running test scenes I'd not run previously.
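A minimal sketch of that polish step - a hypothetical helper, not the actual change - for one approximate root r of c[0]*x^4 + c[1]*x^3 + c[2]*x^2 + c[3]*x + c[4]:

    #include <cmath>

    // Newton-Raphson polish of a single approximate quartic root.
    static double PolishQuarticRoot(const double c[5], double r)
    {
        for (int i = 0; i < 8; ++i) // a handful of iterations normally suffices
        {
            // Horner evaluation of the quartic and its derivative at r.
            double f  = (((c[0]*r + c[1])*r + c[2])*r + c[3])*r + c[4];
            double fp = ((4.0*c[0]*r + 3.0*c[1])*r + 2.0*c[2])*r + c[3];
            if (fp == 0.0)
                break;              // stationary point; leave r as-is
            double step = f / fp;
            r -= step;
            if (std::fabs(step) <= std::fabs(r) * 1e-15)
                break;              // converged to double precision
        }
        return r;
    }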

Understanding the cause here got me wondering whether the reason we ended up with the original > SMALL_TOLERANCE filter in object.cpp and trace.cpp wasn't basically this very root-tolerance issue - the tolerance scales as the scene scales. There was that experimental 3.7 beta version with a much larger supported range for solar-system renderings, I think. There are commit messages related to yanking those changes back out. Maybe not every change was reverted? Something like the >-near-zero root filtering in the object.cpp and trace.cpp updates would likely have been needed for that experiment, given this solve_quartic root-tolerance issue - and the previous state of polysolve/sturm.

Aside: I suppose we might add root polishing to solve_cubic() too - if its roots are found to have poor tolerance compared to the fixed sturm/polysolve results. Fun for another day.

Bill P.

@wfpokorny (Contributor) commented Jun 26, 2018

I'll be checking the solve_quartic() Newton-Raphson root-polishing change into my solver branch sometime in the next few days. It held up well across all my test cases. In fact, it fixes most of the very-low-threshold blob cases which showed additional artifacts due to my pulling the difficult-coefficient code that did the hidden promotion to sturm/polysolve. I was able to improve performance of the polish step too: the hit for the change in the non-sturm lemon scene is now 1.5% as opposed to 6%.

Most coincident-surface intersection issues involving blobs are now gone. This is a combination of the changes suggested herein, my updated blob code - which now returns intersection depths as small as 4e-8 instead of the old, issue-causing 1e-2 - and the solve_quartic() update. By resolving intersections more accurately we are accomplishing - with blobs only, at this point - what many do by scaling scenes up by 100 or so to fix (numerically hide) coincident-surface issues.

Not all in the clear! I have a few new coincident-surface artifacts showing up that I've not explored. Judging by where they show up, these are maybe related to the bounding change proposed with this change request, but we'll see.

I've also yet to do the throw tests looking for <=0 root values. I believe each shape's code should filter roots to an internal intersection depth >0 (today usually 1e-4 to 1e-6), but again we'll see if that's true. If true, I think we can drop the SSLT-ray test and MIN_ISECT_DIST altogether instead of setting it to 0.0. In other words, we'd be able to go back to a v3.6-ish approach.

Aside: Though I've not yet noticed tolerance or accuracy issues with solve_cubic(), I spent yesterday attempting to replace that code with a Newton-Raphson based approach. I might look at it again but, in the time spent, I couldn't find any variation of the code - with at least equivalent performance - that didn't sometimes hit the convergence issues warned about in books and papers when one cannot reliably provide good starting near-root seeds, or bail out and correct on unfortunate guesses.

@wfpokorny (Contributor)

Blast it all... I was running through my test cases with the coefficient reduction off for everything today, because that is still where I'm aiming to go with the solver code. The lemon and ovus test cases are failing due to the changes in this pull request - specifically the changes to trace.cpp. The sturm scenes are failing too, and these have been solid for a while, which leaves me confused. Looks a little like a thread-safety issue but, while +wt1 changes the signature to something less block-ish, it doesn't fix it. Example image attached.

This will be what I'm looking at later this week I guess.

a

@LeForgeron (Contributor, Author) commented Jun 27, 2018 via email

@wfpokorny (Contributor)

Yes, Ubuntu 16.04. The image was with 4 threads. Thanks much for the ideas, but I woke up this morning with the thought that it was likely the shadow cache mechanism - and it is. When I disable it completely in trace.cpp, everything works fine.

What I don't yet understand is why the ovus and lemon are so sensitive to whatever the shadow cache issue is. I'm short on time today - perhaps Friday or Saturday I can find more.

Aside: I was somewhat familiar with the shadow cache because I still suspect it for certain dark pixel media issues I was looking at back in February... I think that work is why my brain jumped to the shadow cache possibility on waking.

I believe the shadow cache mechanism in POV-Ray is a bad idea. No doubt it helps performance (-19% on the non-sturm lemon scene); however, it assumes the color calculated for the light is correct when the cache value is set. It does this in a numeric environment where such assumptions will never be completely safe if one wants the most accurate results. Users should be able to turn it off - or the feature should perhaps be off at the highest-quality render setting.

@LeForgeron (Contributor, Author)

About the ovus and lemon being sensitive: it might be because they are magnified parts of a torus (a 4th-order equation); they need a lot of precision due to the magnification. I did not study the cache system (I'm unaware of it so far).
Usual tori are easier, because they are viewed from afar. But lemon and ovus use kind-of-pathological major and minor circles (most likely similar values, with just a bit of offset between them), and the minor circle is bigger than the major circle too!
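For reference - standard torus algebra, not a quote of the POV-Ray source - this is why these shapes hit a quartic:

    // Torus about the y-axis, major radius R, minor radius r:
    //   (x^2 + y^2 + z^2 + R^2 - r^2)^2 = 4*R^2*(x^2 + z^2)
    // Substituting a ray P + t*D and expanding yields a degree-4 polynomial
    // in t, hence solve_quartic()/polysolve. With R and r nearly equal - or
    // r > R, as for lemon/ovus - the roots cluster and precision degrades
    // faster under magnification.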

@wfpokorny (Contributor) commented Jul 2, 2018

Thanks. Yes, that might be part of the problem. I've been digging on and off over the weekend and earlier today. No overall conclusions yet, but I found and - I think - fixed an oscillation issue in the solver root-polishing code which meant there were not enough iterations - within reasonable limits - to always polish all four roots.

I've not yet pulled the updated polishing code back into POV-Ray in a way I can test, in part because I've been looking at related issues.

For one, I got to checking whether we get <= 0.0 roots from any shapes, which would require us to continue to filter to > MIN_ISECT_DIST (at 0.0). Running a couple hundred test cases that I think cover all shapes, I found two of the three parametric test scenes I regularly run returning <0.0 roots. Only one other scene returns <0.0 roots - my base-performance lemon.pov (solve_quartic) test case.

As I had code where I'd disabled the shadow cache, and now a quite large set of root-solver test cases, I ran through all of them with the shadow cache off. Most were OK, even if occasionally shadowed differently. In a few scenes there was a surprise: the shadow cache being on is covering for root-solver issues.

One such scene is a problem superellipsoid scene Severi Salminen posted back in 2003. In that scene - still with shadow artifacts at some scales - good values sometimes get cached which 'hide' later bad results. More digging to do but, at the moment, it looks like bad results are bad and so not cached; sometimes, when good results are cached, they later cover for bad/missing root results.

Today I looked specifically into the <0.0 roots issue of my lemon.pov scene. It's basically your demo scene with jitter off in the area lights, to be able to compare results. What I see for one pixel with a <0.0 root is the following form for the roots of shadow rays only.

With lemon.pov using solve_quartic():

core/shape/lemon.cpp 238 r[3] = -0.0314257
core/shape/lemon.cpp 238 r[2] = -9.74588e-10
core/shape/lemon.cpp 238 r[1] = 0.363373
core/shape/lemon.cpp 238 r[0] = -1.48089
====<<<<Intersect core/shape/lemon.cpp 353 returning i = 2
core/shape/lemon.cpp 138 I[0].d = -0.0314257
core/shape/lemon.cpp 138 I[1].d = -9.74588e-10
====<<<<All_Intersections core/shape/lemon.cpp 158 returning 1

With lemonSturm.pov using polysolve() :

core/shape/lemon.cpp 227 Solve_Polynomial found 1 roots.
core/shape/lemon.cpp 238 r[0] = 0.363373
====<<<<Intersect core/shape/lemon.cpp 353 returning i = 0
====<<<<All_Intersections core/shape/lemon.cpp 158 returning 0

The discrete solvers return both positive and negative roots, while polysolve returns only positive roots. The roots look good to me given the ray-surface equation, for both solvers. Better, I think, if lemon.cpp ignored negative roots from solve_quartic() up front so they don't need to be filtered later. That might look something like the sketch below.
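    // Hypothetical up-front filter: compact away non-positive roots as they
    // come back from the solver so the later lemon.cpp code never sees them.
    int kept = 0;
    for (int i = 0; i < n; ++i)
        if (r[i] > 0.0)
            r[kept++] = r[i];
    n = kept;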

There are three area lights in the scene and many shadow rays cast. We should - I believe - be finding one surface intersection for at least a few of these shadow rays, but lemon.cpp sees none, given how it handles the returned roots. My plan is to substitute a similar shape - a torus intersection or a lathe - at the pixel position and again look at the shadow-ray roots. That should provide a rough baseline for what we should get as shadow-ray results. However, I'm busy with real life until late this week, so this experiment won't happen for some days.

Wondering if the ray-surface equations for shadow rays with origins 'at' the lemon surface are often such that lemon.cpp's intersection code doesn't come up with single-surface intersections??? As with the superellipsoid scene above, perhaps we are shadow-caching invalid results - given we don't reliably have good ones - and so we get messy shape results.

Unfortunately, the <0.0-root pixel (picked by a triggering throw) was not at a pixel position obviously seeing self-shape shadowing... It might be that my polishing fix has addressed the shadow cache results and I don't yet know it.

Much speculation at the moment - more facts when I can get back to this in a few days.

Edit (July 03, 2018)

Had the thought last night that you might like to see the roots for the camera ray at the same pixel, which are working OK and have the 'closeness' you expect.

With lemon.pov using solve_quartic():

core/shape/lemon.cpp 227 Solve_Polynomial found 4 roots.
core/shape/lemon.cpp 238 r[3] = 13.0577
core/shape/lemon.cpp 238 r[2] = 15.2218
core/shape/lemon.cpp 238 r[1] = 13.8543
core/shape/lemon.cpp 238 r[0] = 13.8493
====<<<<Intersect core/shape/lemon.cpp 353 returning i = 2
core/shape/lemon.cpp 138 I[0].d = 13.8543
core/shape/lemon.cpp 138 I[1].d = 13.8493
====<<<<All_Intersections core/shape/lemon.cpp 158 returning 1

With lemonSturm.pov using polysolve() :

core/shape/lemon.cpp 227 Solve_Polynomial found 4 roots.
core/shape/lemon.cpp 238 r[3] = 15.2218
core/shape/lemon.cpp 238 r[2] = 13.8543
core/shape/lemon.cpp 238 r[1] = 13.8493
core/shape/lemon.cpp 238 r[0] = 13.0577
====<<<<Intersect core/shape/lemon.cpp 353 returning i = 2
core/shape/lemon.cpp 138 I[0].d = 13.8543
core/shape/lemon.cpp 138 I[1].d = 13.8493
====<<<<All_Intersections core/shape/lemon.cpp 158 returning 1

I'm off to real life for a few days.

@wfpokorny (Contributor)

OK. The lemon and ovus shadow-cache sensitivity is due to not filtering near-zero torus roots to a minimum intersection depth. I believe, but have not verified, that the shadow cache responds as it does because the solvers sometimes come up with bogus near-zero roots for the origin-on-surface shadow rays and sometimes do not - in which case the correct root is used. The value cached for the thread then determines whether there is dramatic or muted noise in the local box of pixels (multiple lights).

Not the final form of things, but if in lemon.cpp I do this:

// const DBL DEPTH_TOLERANCE = 1.0e-6;
const DBL DEPTH_TOLERANCE = MIN_ISECT_DEPTH_RETURNED; // 4.4e-8

And later in Lemon::Intersect I add a conditional continue:

        while (n--)
        {
            // Adding the following conditional continue:
            if (r[n] <= DEPTH_TOLERANCE)
                continue;
            ...
        }

Results look good running the modified solvers with no coefficient reduction, the change herein, and the shadow cache active.

Solvers all have tolerances. Roots see scaling. Near-zero negative roots can drift positive. The magnitudes depend on the given solver, the shape's ray-surface equations, and normalization/scaling. In short, this is why we have many different per-shape DEPTH_TOLERANCE values and why every shape needs to filter its 'drift-positive' roots. I'm sitting on a rough draft of a newsgroup post which I hope will better describe what I see, along with my recent commits.

I'm still twiddling with the new root-polishing code - which can often correct the solver root drift enough that smaller DEPTH_TOLERANCEs and larger shape scale-ups can work. Here my initial efforts in some cases pushed past the available numerical accuracy... The oscillation I was seeing - and for which I hacked together a useless patch - is one such case.

I'm busy this weekend so it'll be next week before I pick up work again.

@wfpokorny (Contributor) commented Jul 15, 2018

Update. I resumed working on this and the solver updates last week. In my local space I've updated ovus.cpp, lemon.cpp and parametric.cpp to filter <=0.0 roots and, in the case of ovus and lemon, to filter to much smaller returned root values based on gkMinIsectDepthReturned (4.4e-8, formerly called MIN_ISECT_DEPTH_RETURNED). In some days of running test cases, code looking for <=0.0 roots hasn't triggered any of these 'throw traps.'

It's looking like the right thing is to remove MIN_ISECT_DEPTH and the SSLT exemption conditionals completely, where the test is now >0.0 due to MIN_ISECT_DEPTH being 0.0 in this pull request.

As hinted at earlier here, and posted about recently in some detail in the p.b.i newsgroup, we need to filter smallish near-zero roots which end up as positive roots. However, this is best done in the shape code, while planning for scale-ups of the shapes (the roots/intersections) to as near 1e7 as possible in the global space. Different shape code and shape solver use means we end up with different filtering needs per shape.

I'm currently working through all the shapes using Solve_Polynomial, removing the calls and, with them, the last of the coefficient-whacking code in the trailing-coefficient-reduction stuff - the not-0.0, epsilon-triggered stuff. Mostly looking really good. The changes fix the lemon reduction artifacts and generally allow more accurate roots to be determined in all the shapes updated. See the attached image with another lemon example on top, and an old lathe-media case for which the newsgroup workaround was previously scaling the entire scene larger.

Unfortunately, in test cases other than the lemon one for which I previously posted the image, I'm seeing bad results with smaller, more accurate roots due to how the shadow cache code works today (1). There is trouble with self-shape shadow rays on other shapes too, and I now believe the shadow cache should always be off for self-shadow rays. Looking at the shadow cache in more depth yesterday, it looks like there is in fact a self-shape-shadow filter there, but it must not always be working, for reasons I hope I can determine and fix.

pull358progress

(1) - There are arrangements of lights relative to surfaces which create roots causing the shadow cache to act improperly - all multiple-light cases so far. Note: this problem isn't new with changes like this pull request and the solver updates; it's just seen more often with more accurate roots being returned - and smaller, possibly scaled, roots no longer being filtered to a 'MIN_ISECT_DIST' value in more global spaces.

@wfpokorny (Contributor)

Quick update. I've not gotten to looking at the shadow-cache obstacle to generally more accurate returned roots. I got hung up on the polysolve/sturm adjustments for collapsing dual roots with lathes and sphere_sweeps - the problem of root isolation in such cases. Ideas for doing this automatically, instead of the user having to determine and pass values by experimentation, have proven harder to tune / get right across all my test cases than I expected from prior spot testing.

I now have something close, I think, but yesterday I had the idea to create a few media test cases for sphere_sweeps - because I had none and wanted to be sure the more accurate returned roots were OK with the expected scaling. Well, it turns out these media test cases are generally broken in the current 3.8 master and in all this new solver / root-filtering stuff, no matter the scaling.

I can fix the media ray-origin-on-surface, second-surface (effectively single-root) issues by turning off a single-surface adjustment in the code, but turning this adjustment off re-breaks fixes for recent CSG and end-disc artifacts. Digging more into the sphere_sweep code; I expect this will take me some time.

allobject_set_medians

@wfpokorny (Contributor) commented Aug 4, 2018

Busy in about 15 minutes with real life, but in that time a quick update.

On the shadow cache issues, both the old ones and the more apparent ones with more accurate / non-filtered root results... I'm still working with the code, but it turned out there was more than one issue.

First, the code caching a shape was never looking to remove it from the cache once rays missed. This caused both pointless root finding and, when the shape continued in the bounding-box grouping, double testing of rays on misses (a 16% hit on the lemonSturm.pov test case). Further, cached results were not filtered to the same post-condition: ray intersections which would normally be ignored due to SHADOW_TOLERANCE were not ignored for cached object intersections, because no post-condition on the intersection was being applied to those.

The latter sometimes covers for failed / see-through near-intersection shadowing by other objects (it was not bad - or badly ordered - root results, as I had thought). I'm thinking now we want to keep SHADOW_TOLERANCE, though it's been the same value as SMALL_TOLERANCE for 25-plus years. We want, I think, a very small SMALL_TOLERANCE once all the solvers and shapes support more accurate roots but still - on shallow self-shape intersections only - a larger SHADOW_TOLERANCE, where we ignore shallow self-intersections within that distance to hide/smooth the numerical noise which becomes more apparent with more precise roots where the to-light ray is shallow relative to the surface. A sketch of that two-tier idea follows.
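    // Sketch with hypothetical types and values: a tiny global minimum depth
    // for every intersection plus a larger, SHADOW_TOLERANCE-style bound
    // applied only to shallow self-intersections of the object the shadow
    // ray starts on.
    bool KeepShadowIntersection(const Intersection& isect, const ObjectPtr& origObject)
    {
        const double kMinDepth  = 4.4e-8; // 'very small SMALL_TOLERANCE' goal
        const double kSelfDepth = 1e-3;   // larger, self-shadow-only tolerance
        if (isect.depth <= kMinDepth)
            return false;
        if ((isect.object == origObject) && (isect.depth <= kSelfDepth))
            return false; // hide shallow self-intersection numerical noise
        return true;
    }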

Up next: work on that last, smarter shadow-tolerance filtering. Got to run.

@wfpokorny (Contributor) commented Nov 2, 2018

I happened today to run the benchmark scene against the changes proposed herein, plus the shadow-cache fixes and solver updates. It's not tested often because it's slow to render and, with default jitters, it doesn't cleanly compare rendered image to rendered image.

Anyway, the stored photon count more than doubled with the code including the change herein, causing a somewhat significant change in the resulting image and a slowdown of 5-7%.

The solver updates are unlikely to be involved, as the benchmark scene mostly uses features not tangled with those. It might be the shadow-cache fixes, I suppose, though it's not apparent to me how those would change the number of stored photons.

Thinking it's probably the MIN_ISECT_DIST change herein - or my version of it - and how photons interact with the water. MIN_ISECT_DIST was originally introduced partly to fix (make more like 3.6) photons related to the glass drink in the patio scene.

TODO: Compare to a 3.6 benchmark image. If the image from the updated branch is similar, we are OK. If not, work to understand the differences.

  • Update Feb 16, 2019. In working to get separated commits into my solver branch, I did testing here to verify that the increased deposited photon counts are not related to the solver updates, the associated shape updates, nor the shape code updates needed to enable my version of this pull request, which eliminates MIN_ISECT_DIST and the associated tests. That means the change is due to an update similar to the one in this thread, or perhaps to running with a smaller SMALL_TOLERANCE. As I work through other commits in my solver branch, I'll better know what change(s) in particular cause the increase in deposited photons.

wfpokorny added a commit to wfpokorny/povray that referenced this pull request Feb 1, 2019
wfpokorny added a commit to wfpokorny/povray that referenced this pull request Feb 16, 2019
Working on possible further improvements, but those likely quite far out in
time. In total dozens of issues addressed. New solver call structure.
Solvers themselves more accurate and aligned better with common practice.
Additional root polishing. Corresponding updates to shape code while working
to extend scaling range for shapes. Changes encompass updates needed to
support a commit to follow which covers pull request POV-Ray#358 and its associated
issues.

Generally sturm option now much faster. Also true due improvements to the
fixed solvers that the sturm option is less often necessary.

The sphere_sweep now supports a sturm option. Here sturm runs the sturmian
solver in a two pass approach which, for certain scenes, will work where the
previous single pass sturmian solver did not. Unlike other updated shapes,
the sphere_sweeps scaling accuracy range was not much improved due a
decision to leave other parse time optimizations for run time performance in
place.
wfpokorny added a commit to wfpokorny/povray that referenced this pull request Feb 17, 2019
Fix for issues POV-Ray#121, POV-Ray#125 and several related newsgroup reports as well. Mostly it restores 3.6 behavior with respect to intersection depths filtered.

Note! Scenes using photons will often run slower and look somewhat different due additional photons being deposited. This includes our benchmark scene.
wfpokorny added a commit to wfpokorny/povray that referenced this pull request Feb 21, 2019
Eliminating SMALL_TOLERANCE in addition to MIN_ISECT_DIST. This partly
follows up on a question Jerome asked either in POV-Ray#358 or some related issue.
Namely, why not zero or near it for the Intersect_BBox_Dir calls in
object.cpp. Larger values do indeed cause artifacts with secondary rays as
especially noticeable in scenes with media. It's one of many causes for
media speckles. Pull POV-Ray#358 as originally submitted moved from a
MIN_ISECT_DIST value of 1e-4 to SMALL_TOLERANCE of 1e-3 making the inherent
problem worse. SMALL_TOLERANCE had also been adopted a few other places in
the code. In those cases moved to gkDBL_epsilon or gkMinIsectDepthReturned
as appropriate.
@wfpokorny (Contributor)

Alright. I've been slowed: turns out my 17-year-old APC UPS had become undependable and was periodically taking out my machine. Replaced it, and now back to documenting a couple more issues related to the code changes in this pull request or similar ones.

One of the differences which turned up in my testing for this change was in the shipped diffuse_back.pov scene, as shown in the first row of the attached image (4 threads & radiosity, so noisy). The middle row, using a modified scene, shows more clearly what happens with pull-358-like changes. Some number of photon rays are more or less bleeding through the half cylinder. They do this because sometime during v3.7 development the previous v3.6 setting:

const DBL Cone_Tolerance = 1.0e-6;

in cone.cpp was changed to

const DBL Cone_Tolerance = 1.0e-9;

and this value is too small (1) for some secondary rays. Using the new - and good rule-of-thumb - numerical value for doubles, gkMinIsectDepthReturned, works OK, as best I can see at the moment. The bottom row shows the now mostly-matched v3.8 result.

Ignoring the fact that the MIN_ISECT_DIST filter causes numerous other issues, filtering solver numerical issues after transforming back into world coordinate space will always be hit or miss due to the fixed value. In the case of cylinders/cones it was previously filtering the bad near-zero roots in the diffuse_back scene. In any case, the global numerical space is the wrong place to filter shape/solver numerical issues.

Important too, with respect to any scene with photons, is to understand that intersections were being wrongly ignored due to the previous MIN_ISECT_DIST filtering. This caused photon-ray chains to be dropped which should not have been dropped. Visually it usually comes down to less smoothness in the result, so it's hard to pick up, but yes, results are less good because of it. And yes, this is one reason the benchmark scene and others are seeing the photon counts increase.

Anyway... I've updated my solver branch with a cone.cpp update to go with the other shape updates driven by changes for this pull request.

Aside: Often when comparing photon scene results, I've found I've had to decrease the photon spacing or increase the count, along with autostop 0, for decent comparisons. True for diffuse_back.pov too.

(1) - I initially got fooled here too in thinking 1e-9 OK. The first thing I did on seeing the initially different diffuse_back.pov result was to code up a bunch of my now-standard scaled-up-and-down media scenes for cones and cylinders, with perspective and orthographic cameras. These worked fine. I proceeded to burn an embarrassing amount of time chasing ghosts before working my way back to it being a numerical issue. What I learned - again - is that the numerical difficulty of secondary rays changes as the scenes change. Here the angle of the incoming photons (perhaps IOR to a degree too) was what exposed the issue. Thinking maybe, in addition to the media test cases, photon test cases for each shape might be useful too, but no time - a multiplier on hundreds of test cases to do even minimally... A bezier prism (my updated version plus fixed solvers) equivalent to the cylinder in the diffuse_back scene is OK, for what that's worth.

conestory

@wfpokorny (Contributor)

Issues related to the shipped grenadine.pov scene, the previous comment, and the change to cone.cpp. See the attached image.

The scene pushes the cylinder representing the liquid into the glass-cylinder difference by a very small 1e-8 value.

#declare Liquid= merge {
   sphere {-y*0.8,Ri+0.00000001}
   cylinder {-y*0.8,y*0.6,Ri+0.00000001}
...
}

If we adopt the #358 pull changes more or less straight up, we get the result in the middle of the top row, which is different from - more correct than - the v3.7/v3.8 (at 054e75c) result in the upper left. We get this changed result because Cone_Tolerance is still sitting at 1e-9, and so the second glass and second liquid intersections are now kept. These two intersections were skipped previously - in some POV-Ray versions due to the Cone_Tolerance value being 1e-6, and more recently due to the intersection-depth filtering removed in this pull request or similar. In other words, there are 6 surfaces but, for differing reasons, many of the photon rays shot have been seeing only 4.

On changing cone.cpp from Cone_Tolerance (1e-9) to gkMinIsectDepthReturned (4.4e-8), we again match v3.6/v3.7 results, as shown in the middle row of the image - again for the reasons v3.6 worked.

In the bottom row of the image are versions of the grenadine.pov scene where the liquid is pushed about midway into the glass. With all v3.7+ versions, the photons shot reliably see all 6 surfaces.

The interesting bit to me here is that - in a way similar to the SHADOW_TOLERANCE issues discussed elsewhere - we should perhaps not ignore (or should ignore to a lesser depth) intersections when they come from different shapes/surfaces. Said another way, the intersections being ignored due to the +1e-8 numerical game being played in the grenadine scene are very likely accurate enough to keep. The intersections are more accurate because the ray origin starts on another - and offset - shape/surface.

grenadinestory

@ingoogni commented Mar 7, 2019

Fwiw, a rendering from 05-04-1999, POV-Ray 3.1. The intersection depth between liquid and glass was tweaked a lot to get this result.
cocktail3

@wfpokorny (Contributor)

@ingoogni - Interesting. Thanks for posting. I've not got a version of 3.1 running at the moment, but I do have versions of 3.5 and 3.6 (3.6.1). A detail I left out of my grenadine scene story is that the bottom row of the three-row image I posted doesn't work properly in 3.6.1! On pushing the liquid further into the glass than 1e-6, I got a mostly black image with some bright white parts.

Not going to dig. Guessing something got changed for 3.7 which changes the result when the ordering of shape surfaces overlaps. Could be other things - including me screwing up. Saw the result and walked away... :-)

A second detail left out is that I tried the liquid inside the glass-cylinder difference, which to my thinking would be the physically more correct approach if I'd used a distance >4.4e-8 but close to it. In hacking, I'd used the larger 3rd-row image distance. The photon-ray focusing is about the same, as I expected, but the coloring and reflections were quite different. I didn't sort out all the reasons, given the photon-ray focus looked more or less consistent with my expectations - which was all I was trying to check.

Yeah, I think getting a good result for this sort of liquid-in-a-glass setup is tricky in any POV-Ray version, given you have to push the surface relationships close to numerical capabilities for the best result. It's a little easier in my solver branch, given the work to make the minimum returned intersection depths as small as possible and of a much more consistent magnitude (from 1e-10 to 1e-2 previously, now almost all at 4.4e-8 for the shapes I've updated - sphere_sweep still allows user control, though I want to rip that feature out because it doesn't completely work today). Plus, I have my own version of this pull, needed to get anything less than 1e-4 out of any v3.7+ POV-Ray. Plus, I got rid of SMALL_TOLERANCE, which was sitting way up at 1e-3. Plus, plus, plus; the details go on and on. I'm going to stop, for my own sake as much as yours - or anyone else reading. :-)

@ingoogni commented Mar 8, 2019

IIRC, Ron Parker once posted a method for getting rid of coincident surfaces on, I think, alt.graphics.raytracing. With that, all this would probably/hopefully vanish.

@wfpokorny (Contributor)

@ingoogni, I see the issues here as broader than the traditional coincident-surface issue but, if you can find an actual pointer to the proposal, I'll certainly give it a read.

I've been thinking too about additional CSG methods brought out in the SDL - in large part because I think they have the prospect of being more computationally efficient than methods trying to sort out / manage floating-point calculation issues.

In one approach, something like blobs using new functions/methods for grabbing the individual ray-surface equations (and spherical ranges of influence) from inbuilt shapes and adding them together before solving a single ray-surface equation - globs.

Or, thinking about something like a glass-filled-with-liquid scene, methods to define surfaces apart from the shapes defining the interiors. In this latter case, single surfaces would be found by the usual ray-surface equations, and then the shapes with interiors would be inside-tested slightly ahead of and behind (> numerical minimum accuracy) the intersection to get the interiors.

Generally I'm thinking it's probably better to have alt-csg methods so as to not create numerically-coincident surfaces in the first place.

The first idea I've played with a little, as allowed by the current blob mechanism; it's essentially the base trick used in the blobblur.inc stuff. Otherwise, just wild ideas at this point.

@wfpokorny (Contributor)

Despite saying I wasn't going to do photon test cases for each shape after finding the photon bleed-through issue with the cone/cylinder code, I have slowly been adding at least a 1x base case for each shape. It had been going well until today.

With the lemon, we've got issues which show up both in the existing v3.8 master branch at commit 74b3ebe and in the updated solver branch.

The issue is tangled in the returned intersection-depth stuff, which I cleaned up only as shown necessary in my solver-branch testing. This is why the intensity of the bleed-through increases, I'd guess. I think the core problem, though, is somewhere in the lemon's internal select-the-right-roots code. I tried a couple of quick, half-blind hacks this morning to fix it, but no luck. It might also be that the lemon just needs a larger returned intersection depth than other shapes - I didn't try that yet, and there is some exposure to re-causing other issues the solver branch fixed if the need pushes that way.

I'm busy with real life now until later this week. Posting with an attached image of the current state for now. The v3.8 result is on the left, the new solver-branch result in the middle. Both show that photon bleed-through exists in the lemon code as it stands today, no matter the version. Guess fleshing out these photon test cases was a good idea...

LemonA_to_LemonB

@wfpokorny (Contributor)

On resuming my look at the lemon photon bleed-through, I started to wonder why the newer form of the ovus was not showing the bleed-through too. On looking again at that test case, I found I'd not aligned the middle, extended part with the photons coming through the slot. When we add the near case in the attached image doing just that, we do get photon bleed-through in both the existing v3.8 code and the new solver code. Both the lemon and the current v3.8 form of the ovus shape have this numerical issue, using sturm or not.

Aside: On looking at the ovus again, I created a test case for the clamped spherical form of the lemon. It's OK with respect to this particular form of test case.

OvusNSA_to_OvusNSB

@wfpokorny (Contributor)

Well (pick your favorite profanity)!!!

The numerical hole gets deeper and deeper. I've spent most of the day thus far adding these newer sorts of photon test cases. Got to the sor, and see the attached image where the MIN_ISECT_DIST filter was scrubbing pretty much all of these sorts of numeric issues...

And - given the intensity increases with MIN_ISECT_DIST gone - at the cost of dropping a good many photon rays. In other words, I think where photon rays got past the filter they landed more or less in the right spot, but many photon-ray chains have been getting dropped.

Without the filter we keep all the photon chains, but many (most, even?) see the wrong number of sor surfaces - three surfaces, in fact. See the attached SorIOR1pt3TrvLvl3.png with max_trace set at three, so only the two-surface sor photon rays reach the room's floor. This result is with the new solver/358 code, and it looks about right for focus. Yes, though with only a few varied IOR test cases, it looks like IOR plays a part in the numerical difficulty of the ray-surface polynomials.

Some general goodness can be seen in the v3.8 vs new-solver-plus-358-like image too: the reflections in the glass are significantly cleaned up with the new solver code.

If I focus on test cases, it will be days yet to flesh out a reasonable set. More probably, I'll continue to work on these test cases part time. Lathes and sphere_sweeps are not done at all. For prisms just a few exist, and none taper, which is where I expect bleed-through might show up.

Once a reasonably complete set of test cases is done, I can reasonably start to look at the photon-specific numerical issues - to look at the actual polynomials being created in these cases. Maybe they're somewhat like the long-telephoto (very small perspective angle; test cases are at 2 degrees) issues I still have running around in some of my solver test cases, and with shadow rays - even after a lot of solver accuracy improvements.

I expect further updates here are a long while off.

SorA_to_SorB

SorIOR1pt3TrvLvl3

@wfpokorny (Contributor)

In working on additional photon tests for shapes, I got to the polynomial and realized I'd not yet done an object-media-interval (left column) vs object-media-by-inside-object (middle column) test case. These two methods of defining media should roughly match, and they provide a good check of the inside-shape test code, which often uses shape and solver code different from that used for ray-surface intersections.

Created the missing test case, and it turns out the inside-object test for polynomials needs some part of my version of this #358 pull. Didn't dig further, as the updated solver branch with my 358 is working. Documenting the issue here with the attached test scene and image. Image using the current v3.8 master at commit 74b3ebe.

PolynomialInsideTestNeeds358LikePull

Polynomial_I.pov.txt

@wfpokorny (Contributor)

I have an initial set of photon test cases together and have started to look at the numerical issues. With the lemon and ovus shape code in v3.8 we have layered issues - a few less in the solver branch. One issue is in the way the lemon and ovus select the inner intersections of the torus. Much of the pass-through is happening not due to the filtering of the on-surface second, refractive ray, but rather because we are incorrectly dropping the initial photon ray-surface intersection. This can be seen in the top row of the attached image, where the max intersection limit has been set to 2 and there is just a simple lemon (not the usual CSG). At two, the only way rays can reach the surface in the way they do is if the initial ray sees the bottom, flat surface only.

The problem is in this post-solver intersection code and the test trying to pick the inner torus part:

...
OCSquared = Sqr((horizontal - HorizontalPosition)) + Sqr((vertical - VerticalPosition));
...
if (fabs(OCSquared - r2) < ROOT_TOLERANCE)

I think it should probably be more like:

...
OCSquared = Sqr((horizontal + HorizontalPosition)) + Sqr((vertical - VerticalPosition));
...
if (OCSquared < r2 - ROOT_TOLERANCE )

However, I am testing the following which feels cleaner to me:

...
// Distance of the intersection point from (0, 0, VerticalPosition).
DBL effectiveIRadius = sqrt(Sqr(Ipoint[X]) + Sqr(Ipoint[Y]) + Sqr(Ipoint[Z]-VerticalPosition));
...
if (effectiveIRadius < inner_radius)
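
For context, a minimal sketch of how such a post-solve test might slot in; the surrounding loop, numRoots, roots, rayOrigin, rayDir and PushDepth are hypothetical names for illustration, not the actual lemon/ovus code:

// Hypothetical post-solve filtering: keep only roots whose intersection
// points land on the inner part of the self-intersecting torus.
for (int i = 0; i < numRoots; ++i)
{
    Vector3d Ipoint = rayOrigin + roots[i] * rayDir;
    DBL effectiveIRadius = sqrt(Sqr(Ipoint[X]) + Sqr(Ipoint[Y])
                                + Sqr(Ipoint[Z] - VerticalPosition));
    if (effectiveIRadius < inner_radius)   // inner-branch hit: keep it
        PushDepth(roots[i]);               // hypothetical depth-stack push
}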

In the bottom row we have v3.8.0 on the left and, with the max trace limit back at 50, the solver branch with the above fix plus one possible approach to fixing the root-filtering numerical problems under the issue above. I'm still unsure which way is best to go on the more numerical, refractive-ray issues - but all current candidates do something specific to refractive rays only.

LemonOvusStory

@wfpokorny
Copy link
Contributor

Status. I ran through all my photon test cases over the last couple of days with the ovus/lemon intersection selection fixes and one possible change to help with refractive rays - the last being bumping the origin of refractive rays forward by 2e-4 to put the on-surface root at < 0. This is, given our current code, the simpler change. If the idea is eventually adopted, it would be better to do this sort of thing only for the shapes which seem to need it and, more importantly, to bump the origin forward in the solver's numerical space and not the global space, where a fixed value will cause issues depending upon scene scale.
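
To make that concrete, a minimal sketch under stated assumptions - surfacePoint, refractedDir and the surrounding types are simplified stand-ins, not the actual photon code:

// Bump the refracted ray's origin forward along its direction so the
// on-surface root lands at a depth < 0 and a simple depth > 0.0 test
// rejects it. A fixed global-space constant like this is scene-scale
// sensitive; doing the bump in the solver's numerical space would be better.
const DBL kRefractedBump = 2e-4;
Vector3d bumpedOrigin = surfacePoint + kRefractedBump * refractedDir;
// ... solve the ray-surface equation from bumpedOrigin; keep only roots > 0.0 ...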

Originally I had v3.8/solver-branch issues with the sor, ovus, lemon and polynomial test cases(1). The changes above solve the test issues with the sor and ovus. The lemon bleed-through is substantially dimmer. The polynomial torus case is only very slightly improved.

Further, the polynomial torus - the highest order now being tested - shows first-root (or perhaps near-root-multiplicity) issues, in effect like those now addressed for the ovus and lemon. The lower-order polynomial cases I have are OK with the v3.8 and updated solver branches.

Attached is an image with v3.8 at commit 74b3ebe in the left column, the updated solver branch in the middle column, and the differences on the right. The first and second rows show the polynomial torus test case; the bottom row, a similar torus shape test case.

The first row sets the max intersections to two and, as with the current v3.8 ovus and lemon, we are missing some intersections in a way which ideally should not happen. The middle row shows the max-intersections-at-50 case, and there we have numerical near-surface issues, too. In the bottom row the torus shape case is OK (2).

PolynRefractStory

  1. Across the test cases I've created, that is. I still have very few scene scaled-up/down photon cases, for example. I'm already well beyond the point where I can routinely run and visually review the thousands of test cases I have.

  2. Unsure why the torus shape works and the polynomial torus doesn't. I'm not currently very familiar with the polynomial code.

  • I know the torus shape code has a hack moving ray origins near the torus surface for better accuracy where the ray origin is well outside the torus's containing sphere. More specifically, the hack is a - not completely general - way to normalize the polynomial's evaluation values. I've got a comment in place to look at removing this hack; it has a cost, and can (though not yet seen) somewhat un-polish the found roots when the actual intersection depths are restored.

  • With the current solver branch changes, sturm should no longer determine intersections by value. There was previously a ray-domain/value-domain mix causing surface smoothness issues, among others. With the dedicated order-<5 solvers we still do value-domain evaluations, but these are now - I think - accurate enough not to need the torus code hack. All that said, the photon rays in play here are exactly the sort where the hack substantially changes the magnitudes of the coefficients. It's now on my list to look at the torus photon test cases with and without that hack in place. In any case, it should be a good way to validate removing that torus code hack, should these photon cases not prove to be a reason to keep it - or a more general form of it.

wfpokorny added a commit to wfpokorny/povray that referenced this pull request Apr 13, 2019
Details in the github POV-Ray#358 pull request discussion thread. See especially
comments made in April of 2019. Note, this fixes only one of a layered set
of reasons photon rays can bleed through all or part of ovus and lemon
shapes.
@wfpokorny
Copy link
Contributor

I've been relatively busy with real life of late and will be away from POV-Ray until the middle of next week. So, a status update on what I've learned thus far about the photon-ray-to-torus hack and its effects.

Everything said previously about effects on evaluated values, coefficient ranges, and so on is true. However, somewhat to my surprise, the hack is sometimes helping, as can be seen in the top row of the attached image, where the solver branch runs with the hack in place on the left and without it in the top middle, with the difference shown on the right. Yep, the top middle looks quite a bit like the polynomial-defined torus result.

The second row again runs the solver branch, but tries to show why I've always considered that bit of code in the torus shape a hack(1): the coefficients are only nicely normalized with respect to range so long as the user defines a more or less normalized torus near the origin to begin with. So, everywhere in the second row the hack is in place, but we define the base torus in the 0 to 1 range, the 0 to 1e-4 range, and the 0 to 1e4 range, left to right.

I expected some bleed-through like the top middle in the column-two and column-three cases. Instead, what we get is a good result for the scaled-down case and complete bleed-through in the scaled-up case...

For reference, in the last row we have the v3.8.0 result at commit 74b3ebe, where the middle, scaled-down result is messy due to several solver issues fixed in the solver branch (expected). The scaled-up case on the right acts like the solver branch's scaled-up case. Of course, the v3.8.0 branch is also always using the torus ray-origin hack.

In the scaled-down/up cases there is some complication. I've dug enough to tell the photon ray shooting changes depending on the initial torus scale, for reasons - and in ways - I don't understand at the moment.

torusRayOriginHackStory

(1) - There are also second-surface ray concerns, given the hack is coded only for the ray-origin-outside-the-torus case and not for the ray origin being on or inside the surface.

@wfpokorny
Copy link
Contributor

wfpokorny commented Apr 25, 2019

The photon ray complications mentioned in the previous post were not related to the photon code, but rather to a long-standing bug (or two) in the torus code's Torus::Test_Thick_Cylinder. Throughout the method we have tests of the form:

if ((k > EPSILON) && (k < MAX_DISTANCE))

already in the solver branch changed to:

if ((k > gkDBL_epsilon) && (k < MAX_DISTANCE))

which should be:

if (k > gkDBL_epsilon)

because k is not an intersection depth, and k can be very large. A second issue/effect - seen as a largish performance difference when the initial torus is defined at 1e-4 - happens in the posted test case because the end-cap tests don't test true but one of the last inner-cylinder tests does. Meaning we pay nearly the full cost of Test_Thick_Cylinder and the solver code still needs to almost completely sort out whether there are intersections (+4.8% cost for the 1e-4 scene). Even with the cap cases fixed (the MAX_DISTANCE test gone), so that we more or less immediately return true most of the time, the additional cost is still +0.7%. I didn't confirm the false outer/inner 1e-4 true is also due to the MAX_DISTANCE test (I suspect so).

Where tori are defined more tightly around zero, I could find no cases where Test_Thick_Cylinder was buying us any performance or accuracy. In fact, only where the original torus is defined substantially small (i.e. 1e-4), using sturm, and with most photon rays quickly testing true on the cap test do we see gains(1). Where ray-torus intersections are a substantial part of the total intersections, the Test_Thick_Cylinder penalty usually runs +0.3%, give or take a couple tenths. These tests were all done in the updated solver branch.

(1) - The gains are possible here because there are currently two 0-1 minimum-range clamps in the updated sturm solver. Namely, the bound estimation and the adjusted trip point for the move from bisection to regula-falsi on root isolation don't work well where the roots are all <1.0 in depth. In other words, the updated sturm solver is still slow enough in the <<0-1 defined torus cases that a working Test_Thick_Cylinder - in the cap-facing-the-photon-ray-origin case - helps as internal bounding. If the polysolve issue is fixed, this is likely no longer true.

The up-front plan is to fix the bounding code, but no longer use Test_Thick_Cylinder. It doesn't often help in practice, and where it does, it helps make plain that the <<0-1 intersection ranges still need to be better handled in the updated polysolve code.

Instead of posting another image, I'll just say that with Test_Thick_Cylinder either gone or running fixed, we get good results for all the cases in the second row so long as the ray-origin-to-near-torus hack is in place. Without the hack we get polynomial-like results for the column 1 and 2 cases, and still almost complete bleed-through - but now due to the solvers struggling with the polynomial and not due to bug(s) in Test_Thick_Cylinder.

Complications aside, the good news here is that the torus hack (despite some situational issues of its own) clearly conditions the initial polynomial in a way making it much easier for the solvers to handle. The general idea of normalizing the ray-origin-to-shape intersection distance is perhaps extendable to other shapes and situations. As mentioned elsewhere, I've not yet worked much on the polynomial equation / more-accurate-coefficient side of things, so to speak; the idea behind this torus shape hack plays on, and helps with, that side of things.

Edit: Attaching a test scene for the torus Test_Thick_Cylinder bug.

Torus01_def10000.pov.txt

wfpokorny added a commit to wfpokorny/povray that referenced this pull request Apr 25, 2019
See late April 2019 comments to github pull request POV-Ray#358.
@wfpokorny
Copy link
Contributor

wfpokorny commented May 8, 2019

I've been off looking at numerical accuracy, possible solver approaches, and polynomial evaluation methods in general. In recent days I got back to looking more specifically at the numerical effects of the torus shape hack of moving ray origins nearer the defined torus before calculating the polynomial coefficients.

I modified the POV-Ray code to capture 520 of the photon primary ray-surface equations. Running these against the POV-Ray solvers, my TCL solvers, and sagemath's f.roots(), we get aligned root-count results for everything, with or without the move-P(ray origin)-closer hack. With the hack the results were much more stable, and secondary refraction rays are calculated correctly - we need no changes to the intersection depths filtered for refracted rays. Without the hack, the roots / intersection depths varied a lot - so much so that a significant number of secondary rays saw the first surface again as a second-surface intersection.

So... while the movement of P itself introduces inaccuracy into the ray-surface intersection (1e-13/1e-14 for the scene set-up as currently coded), it does greatly stabilize the ray-origin locations / intersection depths for the secondary refraction rays. The visual mess in the result comes from the repeated first-surface intersections. Filtering these repeats is what drives the need to filter much larger minimum intersection depths on photon ray refraction in general with shapes.

Here we have hard evidence we should always be running the solvers in a normalized space about the origin - normalized shape space, to prevent long internal shape rays. Further, where ray origins are well off the surface, we need to move them closer - for most all shapes, I think.

Moving the ray origin closer does introduce inaccuracy, and I'm still looking at different methods for perhaps a better general approach (first intersecting with bounding spheres, etc.).
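
As a minimal sketch of the shift-and-restore pattern being weighed here - IntersectBoundingSphere and SolveQuartic are hypothetical helpers, not the actual POV-Ray routines:

// Shift the ray origin up to the shape's bounding sphere before forming
// the polynomial, then add the shift back onto every returned root so
// the depths are again relative to the true ray origin.
DBL shift = 0.0, tNear, tFar;
if (IntersectBoundingSphere(rayOrigin, rayDir, center, radius, tNear, tFar) && tNear > 0.0)
{
    shift = tNear;
    rayOrigin += shift * rayDir;      // coefficients now formed in near-shape space
}
int n = SolveQuartic(coeffs, roots);  // depths relative to the shifted origin
for (int i = 0; i < n; ++i)
    roots[i] += shift;                // restore true intersection depths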

In two of the 520 rays I was lucky enough to pick up great examples where all polynomial sign-based methods are unstable due to polynomial evaluation noise in the space where the intersections must be found. In the sturm bisection code there are comments - with a patch/test-return(0) by CEY from the 1990s - about such root-count inversions being bugs. Not really: it is just that the local numerical space where we are forcing the polynomial evaluations to run is noisy. We've exceeded the available numerical accuracy, and values snap to what a DBL can represent. Said another way, the polynomial values(1) at any given t have gotten really noisy/fuzzy, and so the sign-based iterative methods also become locally unstable.

(1) There are various evaluation methods. The one used in POV-Ray at present is a forward Horner scheme. I've also looked at backward and direct evaluation - mostly they just move the noise around. With FMA instructions, the evaluation method may matter less.
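
For reference, a minimal sketch of a forward Horner evaluation; the highest-power-first coefficient ordering is an assumption for illustration, not necessarily POV-Ray's internal layout:

// Evaluate p(t) = c[0]*t^n + c[1]*t^(n-1) + ... + c[n] by Horner's rule:
// one multiply and one add per coefficient, folding in c[i] at each step.
DBL EvalPolyHorner(const DBL c[], int order, DBL t)
{
    DBL v = c[0];
    for (int i = 1; i <= order; ++i)
        v = v * t + c[i];
    return v;
}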

The attached image shows, top to bottom: the moved-P-closer hack evaluation space, which is nice and clean; the NOT-moved-P evaluation space; and last, the NOT-moved evaluation space where the coefficients have been somewhat normalized toward smaller extreme values - so we get less random sampling-value snapping and a cleaner plot. In the bottom case we can see how we don't have two clean crossings in the evaluation space, and how the iterative sign-based methods can be extremely noisy with respect to the result.

RayOriginAfarAndNot

The particular ray - torus surface equations and some data/results:

======= 000064815
 a : 1*x**4 - 5516.9240070709175*x**3 + 11413669.045912787*x**2 - 10494724327.602537*x + 3618662390166.4331

 b : 1*x**4 - 2.9999519525605454*x**3 + 3.4833813978271317*x**2 - 1.9168318408280143*x + 0.42758207302474094

             Root 1                       Root 2                   Root2 - Root1
 a     1379.2514406571435757    !!! 1379.2540619861852065   !!! 0.0026213290416308
 b     1379.2334492246759510        1379.5478114237391765       0.3143621990632255
       ---------------------------------------------------------------------------
          0.0179914324676247          -0.2937494375539700      -0.3117408700215947

a) No hack. P not moved closer. Middle/bottom graphs in attached image.
b) With hack. P moved closer to torus. Top graph in attached image.

All the sign-based approaches are unstable for ray-surface eqn (a); for example, see below the confused Fourier sequence results for some ranges near the non-moved ray-origin roots. Ideally we should see two clean '= 1' ranges.

UPolyFourierSequenceRootCount 1379.20 <4> to 1379.21 <4> = 0
UPolyFourierSequenceRootCount 1379.21 <4> to 1379.22 <4> = 0
UPolyFourierSequenceRootCount 1379.22 <4> to 1379.23 <4> = 0
UPolyFourierSequenceRootCount 1379.23 <4> to 1379.24 <1> = 3
UPolyFourierSequenceRootCount 1379.24 <1> to 1379.25 <1> = 0
UPolyFourierSequenceRootCount 1379.25 <1> to 1379.26 <2> = -1
UPolyFourierSequenceRootCount 1379.26 <2> to 1379.27 <1> = 1
UPolyFourierSequenceRootCount 1379.27 <1> to 1379.28 <1> = 0
UPolyFourierSequenceRootCount 1379.28 <1> to 1379.29 <1> = 0
UPolyFourierSequenceRootCount 1379.29 <1> to 1379.30 <1> = 0
UPolyFourierSequenceRootCount 1379.40 <1> to 1379.60 <0> = 1

@wfpokorny
Copy link
Contributor

wfpokorny commented Jun 6, 2019

An update. While mostly busy with other things over the past month, I've spent time looking at the torus.cpp hack of moving ray origins closer for better accuracy - especially for downstream secondary rays. A couple of new observations:

  1. My first push was toward using a first spherical intersection to move the origin closer, but I found that when the torus is, say, lying flat on the ground - and especially when it's more of a hoop than a doughnut in shape - the enclosing sphere as a technique isn't all that good. When the ray origin approaches from above the hoop, a ray-enclosing-sphere intersection can still leave the ray origin numerically well away from the torus. The existing hack is somewhat worse than a spherical intersection in that, once the ray origin moves just inside the enclosing sphere, suddenly no adjustment is made at all. In other words, there can be a significant discontinuity in accuracy with the current approach.

  2. I coded up a version of the torus coefficient generation in TCL so I could play with differing approaches more easily. I've found the TCL and C++ coefficient values are not identical. The coefficients are not wildly different, but different enough to be a significant part of the overall accuracy of the surface intersections - especially when the ray origin is far from the surface. I suspect C++ optimization, and our having fast math on in the *nix compiles, is in play, but I don't yet know; I need to spend some time trying to better understand what's happening. Seeing this 'coefficient drift' also has me thinking about the new PRECISE_FLOAT compile hook - specifically, whether it should be extended into the shape code for those shapes using the common solvers. Today it's used within the common solvers only, and with the new root polishing it's less impactful than it was when initially implemented.

Edit, June 07 2019: As for (2), two causes. A typo in the minor radius input of one of my ray-torus test sets. Plus, I'd changed one equation which had read ... - a - b to essentially ... - (a + b), and where the ray origin was far off this was causing some coefficient wobble. The latter is not an issue when ray origins are close.
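
The grouping wobble is plain floating-point rounding: the two forms are algebraically identical, but round at different points when the leading term is huge. A tiny self-contained demonstration, with values chosen only to expose the effect:

#include <cstdio>
int main()
{
    // With x = 1e16, the double spacing (ulp) is 2, so the two groupings
    // round at different points and land one representable value apart.
    double x = 1e16, a = 3.0, b = 3.0;
    double u = x - a - b;    // two rounded subtractions
    double v = x - (a + b);  // a + b = 6.0 is exact; one exact subtraction
    std::printf("%.1f\n", u - v);  // prints -2.0 under IEEE round-to-nearest-even
    return 0;
}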

@wfpokorny
Copy link
Contributor

An update as I'm away for a few weeks. I think I see a path to more accurate roots taking the hint from that torus.cpp patch. I've posted details of recent ideas/work on the newsgroup at:

http://news.povray.org/povray.programming/thread/%3C5d0f64ff%241%40news.povray.org%3E/

if interested.

@wfpokorny
Copy link
Contributor

Documenting. Any scene with large numbers of basic shapes (spheres, cylinders, etc.) where the shapes are themselves very small may see a significant slowdown on adoption of a #358-like fix.

Details:
While working with a captured attractor scene (see closed issue #239), I noticed a 20% performance hit with my updated solver branch. The particular scene is composed entirely of very small spheres. On investigation, the slowdown has nothing to do with the solver updates, but rather with my version of this #358 pull request integrated into that branch.

59.94user 0.64system 1:01.03elapsed 99%CPU <-- master...
59.17user 0.58system 1:00.25elapsed 99%CPU <-- pSlvr
71.34user 0.64system 1:12.47elapsed 99%CPU <-- pSlvr358 +20.57%

The problem in action is similar to the sphere's internal issue, where spheres of tiny radius (<1e-2, say) can have intersection depths starting at the edges (rays near tangent) which cause parts - or even all - of the shape to disappear. Intersection depths are today getting dropped due to hard-coded, scene-scale filtering values.
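
A toy illustration of the scale blindness - the cutoff value and names here are hypothetical, chosen only to show the mechanism:

// A secondary ray leaving one tiny sphere and hitting a neighbor only
// 5e-5 away has a perfectly valid intersection, but a fixed global-space
// cutoff silently discards it, so the neighbor vanishes for that ray.
const DBL kFixedDepthCutoff = 1e-4;       // hypothetical scene-scale-blind value
DBL depth = 5e-5;                         // valid hit on a nearby tiny sphere
bool kept = (depth > kFixedDepthCutoff);  // false: the hit is wrongly dropped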

See the attached image: v3.8 with only my current solver updates at left; my version of this #358 update - and a more correct image result - in the middle(1); differences at 10x intensity on the right. No AA used. AA differences where intersection pruning is happening could, I think, be much larger or smaller depending on the scene and scalings - those experiments are not done.

pSlvr_to_pSlvr358

@c-lipka
Copy link
Contributor

c-lipka commented Jun 11, 2021

@wfpokorny @LeForgeron Do you guys have a TL;DR on the current status of this pull request? Is it a clean solution of the problems at hand?

Given the lengthy discussion above, I presume it is not, but I'll give you a chance to convince me of the opposite before I close this pull request.

Any further discussion of the underlying problems might be better suited for the issue #121 discussions section.

@c-lipka c-lipka added the ToDo: user feedback action by reporting user required: needs more information label Jun 11, 2021
@wfpokorny
Copy link
Contributor

My view is to do nothing with this for v3.8. People aside from me and Jerome have not been routinely running this fix (for me, a similar one in povr alongside hundreds of solver and related shape-code fixes). Go with what's been in v3.7 and v3.8 for more or less the last decade.

For v4.0, I'd say merge Jerome's proposed fix herein as a start, and see what falls out(1).

To me it looked like someone fixed something related to intersections in the global space for v3.7... Filtering intersections in the global space is not a good way to go, as the bug referenced herein - and other similar v3.7 / v3.8 ones - demonstrates.

(1) - IIRC, in renders with lots of small shapes the render time is likely to go up, as more valid / OK intersections involving those small shapes will be kept. Such renders might look different too, if a significant part of the intersections dropped by the original v3.7 fix is tangled up in the image result.


There are deeper solver/intersection-handling issues behind both a reversion of, and the original, v3.7 fix; but for the purposes of whether to adopt Jerome's particular fix / retraction of the v3.7 change, those fit better with the particular issues that might fall out once this pull (or similar) is merged - in v4.0.

Aside: pointing to this pull request is a good solver-related reference, as there is a good amount of useful information in the discussion thread. The conclusion is that even with my "maxed out" solver improvements, there are still situations - due to ray length, or elimination/reduction in a dimension of change - where great, float-hardware-based solvers alone are not enough.

I believe a more radical change, rolling equation generation and the solver into one entity, is the way to go - for accuracy, perhaps as a mode beyond or replacing sturm. It's an approach which can work on CPU float hardware. I made a 'solver epiphany' news post about it years back - in the programming group, maybe - but my only working code is some messy stand-alone TCL code for the torus. The necessary changes to POV-Ray internals to adopt such an approach would be significant. One way to picture the idea is sketched below.
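
My reading of that idea, as a hedged sketch rather than the actual TCL code: skip the expansion into quartic coefficients (where the cancellation damage is done once, up front) and instead evaluate the shape's implicit function directly at the ray point for each candidate t the iteration visits:

// Torus implicit (tube in the x-y plane, axis along z, major radius R,
// minor radius r): f(P) = (|P|^2 + R^2 - r^2)^2 - 4*R^2*(Px^2 + Py^2).
// Sign changes of f along the ray locate intersections without ever
// forming the expanded polynomial's coefficients.
DBL TorusImplicitAlongRay(const Vector3d& O, const Vector3d& D,
                          DBL t, DBL R, DBL r)
{
    Vector3d P = O + t * D;
    DBL s = P[X]*P[X] + P[Y]*P[Y] + P[Z]*P[Z] + R*R - r*r;
    return s*s - 4.0*R*R*(P[X]*P[X] + P[Y]*P[Y]);
}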
