What we need is a hardware infinite precision rationals unit.
unquote
If it were that simple . . . I shall leave it to Nophead to explain to all
of us how to build even one component of such a machine: a memory capable of
storing a single number at infinite precision, in a finitely sized space
containing a finite amount of matter, as small as Special Relativity demands
our universe is, and with its spatial resolution bounded from below by the
Heisenberg uncertainty principle.
As far as I know, lies are most effectively told by not mentioning certain
facts, as Nophead did by not mentioning that our universe contains only a
finite amount of matter from which memory may be constructed. Had he
considered that, I don't think he would have lied to himself, as he so
clearly did.
If you want to do CSG operations (unions, . . .) on a computer, you have to
program a routine that handles the intersection of parallel planes, planes
that in Euclidean Geometry do not intersect at all. That isn't so difficult,
but what about planes that are just shy of being parallel? Mathematically,
the procedures needed to calculate the intersection involve a division: by
zero for truly parallel planes, by near zero for nearly parallel planes. To
solve that, you need to understand cardinal numbers, and in particular how
to extend the concept of cardinal numbers to sets of finite size. As far as
I know, CGAL uses just that understanding to sail around the problems. And
that means it uses rational numbers or, if you allow me some inaccuracy, it
does not do divisions at all. (In a binary number system, division by any
number other than a power of two generally leads to an infinite string of
digits, and thus to imprecise results.) And these mathematical tricks,
otherwise known as arbitrary precision arithmetic, take lots of computing
time; according to one source, 1000 times as long as without them.
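The near-parallel trap can be seen in a few lines of Python (an illustration
only; CGAL's actual machinery is far more elaborate). Intersecting two
almost-parallel lines by Cramer's rule divides by a near-zero determinant,
which magnifies float rounding error enormously, while the same computation
with Python's exact fractions.Fraction gives the exact answer:

```python
from fractions import Fraction

def intersect(a1, b1, c1, a2, b2, c2):
    # Intersection of a1*x + b1*y = c1 and a2*x + b2*y = c2 (Cramer's rule).
    d = a1 * b2 - a2 * b1          # near zero for nearly parallel lines
    return (c1 * b2 - c2 * b1) / d, (a1 * c2 - a2 * c1) / d

eps = Fraction(1, 10**12)
# Exact rational inputs: y = x versus y = (1 + eps)*x - 1.
x, y = intersect(1, -1, 0, 1 + eps, -1, 1)
print(x == 10**12 and y == 10**12)  # True: exactly 1/eps, no rounding at all

# The same computation in 64-bit floats loses several significant digits
# to cancellation in the determinant:
fx, fy = intersect(1.0, -1.0, 0.0, 1.0 + 1e-12, -1.0, 1.0)
print(abs(fx - 1e12))  # large absolute error, nowhere near 0
```

This is the sense in which exact kernels "do not do divisions at all": every
intermediate stays a ratio of integers, so the size of the determinant never
affects correctness, only speed.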
If this number is correct, then parallel computing is not an answer to slow
rendering times either, as you may be able to increase speed, say, tenfold,
but not 1000-fold, unless you invest a lot of money in a large GPU farm
containing hundreds of GPUs. Who would be able to afford that?
So here you have another lie: parallel processing will speed up rendering.
Indeed it does, but by how much its proponents never say, or it would not be
promoted so ardently.
. . .That is if rational representation is the only way to get exact
geometry.
. . .
Is it possible to do CSG with floats and get correct results?
. . .
unquote
The answer here is a flat no. It is not possible to get "exact geometry" if
you restrict yourself to floats. And again I must invoke the hated word
"lie" to explain what can be done.
First of all, the so-called "exact answer" is an approximation of reality,
as it implies infinite precision. And infinite precision is not attainable
by any instrumentation in the light of the Heisenberg uncertainty principle.
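Part of the reason floats cannot carry "exact geometry" fits in two lines of
Python: float addition is not even associative, so the "same" coordinate
computed along two different paths through a CSG tree need not compare equal.

```python
a, b, c = 0.1, 0.2, 0.3
# The "same" sum, evaluated in two orders, yields two different doubles:
print((a + b) + c == a + (b + c))  # False
print((a + b) + c, a + (b + c))    # 0.6000000000000001 0.6
```

Vertices that ought to coincide end up a few ULPs apart, which is one way
near-degenerate triangles and failed boolean operations arise.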
What Peter Hachenberger, one of the authors of CGAL, has shown in his
doctoral thesis is that CGAL is more robust against misuse than the
competition. What I cannot answer here is whether this statement holds for
all possible competitors, or only for those he, and I, have considered. The
lie here is to treat the "exact answer" as a bijective mapping of the
computer model onto something that can be produced by a "perfect" printer.
My own work on identifying the causes of "degenerate triangles" has given me
hope that something like an "exact approximation" is possible, as it has led
me to understand the design errors that underpin CGAL or, better said, the
design flaws arising from joining CGAL into OpenSCAD. If my current
thinking pans out, I expect to obtain a ten-thousandfold speed increase in
rendering objects. The price for this effort is steep, however: it demands
that the current OpenSCAD developers move from being coders (people who can
string together libraries) to being programmers (people who consider and
implement the constraints and assumptions underlying those libraries).
I will conclude my remarks with a short quote from issue1258.stl, an
OpenSCAD-generated .stl file:
facet normal -1 -4.4983e-08 -4.4983e-08
outer loop
vertex -0.249999 -29.6066 22.3934
vertex -0.25 -20 32
vertex -0.249999 -20 12.7868
endloop
This short excerpt says it quite clearly: whoever coded the .stl generator
had no clue about data accuracy, about what is realistic to report, and what
denotes sheer stupidity and ignorance. Otherwise, it would have read
facet normal -1 0 0
outer loop
vertex -0.25 -29.6066 22.3934
vertex -0.25 -20 32
vertex -0.25 -20 12.7868
endloop
Why? The coder decided to report 6 decimal places. This is quite
appropriate, if not excessive, for an item that is produced with final
dimensions accurate to maybe two and, at best, four decimal places. But then
he/she should have realized that the largest dimension ends four places
behind the decimal point. Reporting other dimensions to more places behind
the decimal point is thus a form of bullshitting, of lulling the user into
false trust. There is simply no real difference between -0.249999 and
-0.25. But coding for the first output is less work, in particular less
mental work, than coding for the second output.
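A precision-aware exporter needs only a few lines more. The sketch below is
hypothetical (not OpenSCAD's actual .stl writer): it rounds each coordinate
to the places the data can support and strips the noise, so -0.249999 comes
out as -0.25 and a normal component of -4.4983e-08 comes out as 0.

```python
def fmt(v, places=4):
    # Round to the decimals the data actually supports, then strip
    # trailing zeros, so -0.249999 prints as -0.25 and -0.0 as 0.
    s = f"{round(v, places):.{places}f}".rstrip("0").rstrip(".")
    return "0" if s in ("-0", "") else s

print(fmt(-0.249999))    # -0.25
print(fmt(-29.6066))     # -29.6066
print(fmt(-4.4983e-08))  # 0
```

Applied to the facet quoted above, this reproduces the corrected output.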
As far as lattices go, they can be constructed in OpenSCAD quickly and
rendered within seconds, if you do not involve CGAL at all in generating the
output .stl file, i.e. if you let polyhedron() do all the work. Both points
and faces can be generated with for loops without a problem, but doing so is
not a task for a newbie.
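To sketch what "letting for loops generate points and faces" looks like
(Python for illustration; the nested loops translate directly into OpenSCAD
list comprehensions feeding a single polyhedron(points, faces) call), here
is a hypothetical lattice built as an n x n x n array of separate cube
struts, each contributing 8 points and 6 quad faces:

```python
def box(ox, oy, oz, s):
    # 8 corners of a cube of side s at (ox, oy, oz); index = dx + 2*dy + 4*dz.
    pts = [(ox + dx * s, oy + dy * s, oz + dz * s)
           for dz in (0, 1) for dy in (0, 1) for dx in (0, 1)]
    # 6 quad faces, each wound clockwise when seen from outside,
    # which is the orientation polyhedron() expects.
    fcs = [[0, 1, 3, 2], [4, 6, 7, 5],   # bottom, top
           [0, 4, 5, 1], [2, 3, 7, 6],   # front, back
           [0, 2, 6, 4], [1, 5, 7, 3]]   # left, right
    return pts, fcs

def lattice(n, pitch, s):
    # Concatenate n**3 disjoint cubes into one points/faces pair,
    # re-indexing each cube's faces into the global point list.
    points, faces = [], []
    for i in range(n):
        for j in range(n):
            for k in range(n):
                base = len(points)
                pts, fcs = box(i * pitch, j * pitch, k * pitch, s)
                points.extend(pts)
                faces.extend([base + v for v in f] for f in fcs)
    return points, faces

points, faces = lattice(3, 10, 4)
print(len(points), len(faces))  # 216 162
```

No booleans and no CGAL are involved, hence no arbitrary precision
arithmetic: the render is essentially a file write.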
wolf
--
View this message in context: http://forum.openscad.org/Lattice-structure-tp21001p21060.html
Sent from the OpenSCAD mailing list archive at Nabble.com.