
SharpEars

Registered Member
  • Posts

    196
  • Joined

  • Last visited

  • Days Won

    5

SharpEars last won the day on February 4 2023

SharpEars had the most liked content!

Profile Information

  • First Name
    SharpEars
  • Location
    Chicago

HW | SW Information

  • OS
    Windows
  • CPU
    i9-7980XE
  • GPU
    Nvidia 1080Ti

Recent Profile Visitors

1,436 profile views

SharpEars's Achievements

  1. I think that I understand what you are saying, but to allow for such a "special case of non-planar polygons" is to violate the concept of planarity, and it is very confusing to the user. If this is in fact the cause of such a quad being marked as non-planar by Cinema 4D's mesh checking, even though by all sensible (and mathematical) definitions the quad is in fact planar (FP error aside, which I don't believe is present in this case; I am sure a similar case can be created with all integral coordinates to guarantee no FP error, at least from a point-coordinate perspective), then Cinema 4D's mesh checking needs to be modified to take this "special" case into account and not mark it as non-planar. The fact that it does so is very misleading from the perspective of the user. Mesh checking is used quite frequently to ensure that one's point layout and alignment is not causing quads to be non-planar (i.e., "rounded" due to the non-planarity of the triangles that they are subdivided into). Allowing for an edge case such as this only leads to confusion, unless there is an extremely good reason why such quads should be marked as "non-planar." I purposefully put that term in quotes in the preceding sentence, since it is being used in a very "off-label" (i.e., non-conventional and non-mathematical) way, as an indicator to the user of the presence of this edge case.
  2. Version info: Cinema 4D 2023.2.1

     There appears to be some sort of bug in the polygon planarity detection of the Mesh Checking functionality. An example:

     The quad in question as part of a polygonal object:
     Coordinate Manager showing zero depth along the X dimension:
     The quad as displayed in an excerpt of the Structure Manager set to Polygons Mode:

     BTW, there are no N-Gons present in this object - all triangles and quads.

     The actual point positions of the quad's points via the Structure Manager - Points Mode:

     Tests already performed:
     - There are no overlapping/coincident points
     - Optimize to a reasonable distance has been performed (no change)

     I even loosened the Mesh Checking settings for the Not Planar Polygons detection to 5 degrees to see if that makes any difference (as well as any other large angle I attempted, including 0, 45, 90, and 135 degrees - why are angles > 90 even permitted?). It does not; the quad is still detected as non-planar:

     Here is a Python script that shows the coordinates of the polygon's points with more precision:

     Relevant subset of code

         if __name__ == '__main__':
             ob = doc.GetActiveObject()
             for point_idx in (3, 26, 27, 43):
                 point_coords = ob.GetPoint(point_idx)
                 print(f'point {point_idx:2n} coords with more precision: '
                       f'{{{point_coords.x:.15f}'
                       f', {point_coords.y:.15f}'
                       f', {point_coords.z:.15f}}}')

     Output

         point  3 coords with more precision: {0.000000000000000, 1.333333253860474, -0.666666626930237}
         point 26 coords with more precision: {0.000000000000000, 1.666666597127914, -0.333333283662796}
         point 27 coords with more precision: {0.000000000000000, 1.375962178533936, -0.322070503298034}
         point 43 coords with more precision: {0.000000000000000, 1.354647725510147, -0.494368489831828}

     This quad is co-planar with, and for this specific case literally lies on, the Y-Z plane. It has no X depth, and yet it is deemed non-planar by the Mesh Checking functionality.
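     For completeness, here is a minimal, standalone planarity check (plain Python, nothing beyond the coordinates printed above is needed) that can confirm the math independently of the Mesh Checking tool. It measures the scalar triple product of three edge vectors sharing point 3; for coplanar points the result is exactly zero. This little script is my own verification sketch, not part of the original test set.

         def sub(p, q):
             return (p[0] - q[0], p[1] - q[1], p[2] - q[2])

         def cross(u, v):
             return (u[1] * v[2] - u[2] * v[1],
                     u[2] * v[0] - u[0] * v[2],
                     u[0] * v[1] - u[1] * v[0])

         def dot(u, v):
             return u[0] * v[0] + u[1] * v[1] + u[2] * v[2]

         # The four quad points, copied from the high-precision output above
         a = (0.000000000000000, 1.333333253860474, -0.666666626930237)  # point 3
         b = (0.000000000000000, 1.666666597127914, -0.333333283662796)  # point 26
         c = (0.000000000000000, 1.375962178533936, -0.322070503298034)  # point 27
         d = (0.000000000000000, 1.354647725510147, -0.494368489831828)  # point 43

         # Signed volume of the parallelepiped spanned by ab, ac, ad;
         # it is zero if and only if the four points are coplanar.
         volume = dot(cross(sub(b, a), sub(c, a)), sub(d, a))
         print(f'scalar triple product (0.0 => planar): {volume:.18f}')

     Since all four X components are exactly zero, every edge vector lies in the Y-Z plane and the triple product evaluates to exactly 0.0; the quad is planar by the usual mathematical definition.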
  3. This is easily testable via Python - thanks for the tip. I'll do the test and see what shakes out.
  4. Steps to easily reproduce:

     Create a Platonic Object with the following properties:

     Create a Subdivision Surface generator as its parent, as the hierarchy of the following image depicts, with the properties of the generator set as shown (n.b., the Type property is changed to OpenSubdiv Loop to create more geometry while maintaining a reasonable level of subdivision):

     Select the Subdivision Surface object and collapse its entire object hierarchy into a single editable polygonal object (C keyboard shortcut), which I have renamed to Editable Subdivided Platonic in the forthcoming image showing the results. The resulting object is shown below from the front (i.e., via a Front View), along with its point stats as shown in Point Mode:

     Switch back to Object Mode and bring up the Structure Manager as well as the Axis Center dialog box. Compare the placement of the Object Axis when Points Center is checked vs. unchecked in the Axis Center dialog, as well as the ensuing changes to the object's (local) point coordinates in the Structure Manager.

     Once again, here are the remaining settings for the Axis Center dialog. These should be left intact for the aforementioned test (except for Auto Update - your choice, depending on whether you prefer to update manually or not - and, of course, the toggling of the Points Center checkbox, a property that must be toggled in order to reproduce the bug being reported):

     Please let me know if this is a bug that can be readily fixed for the next release of Cinema 4D.
  5. Version info: Cinema 4D 2023.2.1

     The Axis Center dialog can be used to center the Object Axis of a polygonal object (we'll assume no anomalous points, edges, or polygons are present in said object). In Model Mode, with the object selected, we bring up the Axis Center dialog and ensure that Action is set to Axis to and Center is specified as All Points. Include Children, Use All Objects, and Viewport Update (not relevant) are unchecked for the sake of discussion. Auto Update is checked, for convenience, so we can see the effect of our choice immediately and without having to click on the Execute button at the bottom of the dialog.

     We will compare the following two sets of dialog settings and accompanying scenarios:

     Scenario #1 - Use the Points Center override to fix all of the X/Y/Z relative positioning percentage values to zero
     The Points Center checkbox is checked, thus disabling the ability to make any changes to the X/Y/Z percentages and making said percentages inapplicable, resulting in the following set of settings for the dialog:

     vs.

     Scenario #2 - Use X/Y/Z percentage values to define relative positioning among the points
     The Points Center checkbox is not checked, and the X/Y/Z percentages are all set to 0%:

     Now, one would expect these two possibilities to produce identical outcomes, but this appears not to be the case, and this, in my opinion, is a bug. There is a slight difference in the placement of the Object Axis, with the placement for Scenario #1 (i.e., using the Points Center checkbox) being the one that is in error.

     For my scenario, I have a fairly complex polygonal solid shape with 892 points, 1897 edges, and 969 polygons. It should be possible to reproduce this behavior with a far simpler shape, but I will enclose an excerpt image showing the Object Axis placement for the two scenarios from the perspective of the Front View to show that they do in fact differ. In any case, I will show additional screenshots, the first being the position that results from the settings of Scenario #1 - Points Center:

     Next is the position that results from the settings of Scenario #2 - explicit X/Y/Z offsets of 0%:

     And to make the difference more visible, an overlaid version of both scenarios:

     If you look at any of the depictions of the three axis arrows (X, Y, or Z) in the overlaid scenario, you will see a small offset along the X axis, which appears as a blur, and a still smaller offset along Y. I will show the values of one of the first points of the object for both scenarios, so that the numeric difference can be presented empirically:

     Scenario #1 - incorrect axis placement - Point 0 coords post axis-centering operation: -4.777, -35.7576, -0.0137
     Scenario #2 - correct axis placement - Point 0 coords post axis-centering operation: -4.75, -35.75, 0

     Characterization of Differences

     The following are the (absolute value) percentage errors in the X, Y, and Z directions. Each represents the error in offset along one of the three Euclidean axes as a percentage of the object's corresponding size (i.e., width, height, and depth, respectively) along that direction:

     X = 0.257%, Y = 0.007%, Z = 0.130%

     In my opinion, the difference is too large to be attributed to mere floating-point error, especially within the context of the scale used for the project:

     Project Scale: 1 cm
     Effective Scale: 1.000 x (No scaling)
     Object size (X x Y x Z): 10.5 cm x 108.5 cm x 10.5 cm
     Grid spacing shown in the images above: 0.5 cm
     Both Display and Project units are: cm
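     As an aside, for anyone who wants to compare these numbers against something concrete, the following is a minimal Script Manager sketch that prints two candidate "centers" of the selected object's points in local coordinates: the plain average (centroid) of all points and the midpoint of their bounding box. Which of these the Points Center checkbox and the 0% X/Y/Z percentages are each supposed to produce is an assumption on my part (and part of what is in question here); the sketch merely makes the comparison against Point 0 easy.

         import c4d

         def main() -> None:
             ob = doc.GetActiveObject()
             if not isinstance(ob, c4d.PointObject):
                 print('Please select a single point/polygon object.')
                 return

             points = ob.GetAllPoints()  # local coordinates
             if not points:
                 print('The selected object has no points.')
                 return

             # Candidate center #1: average of all point positions
             centroid = c4d.Vector()
             for p in points:
                 centroid += p
             centroid /= len(points)

             # Candidate center #2: midpoint of the points' axis-aligned bounding box
             lo = c4d.Vector(points[0])
             hi = c4d.Vector(points[0])
             for p in points:
                 lo.x, lo.y, lo.z = min(lo.x, p.x), min(lo.y, p.y), min(lo.z, p.z)
                 hi.x, hi.y, hi.z = max(hi.x, p.x), max(hi.y, p.y), max(hi.z, p.z)
             bbox_mid = (lo + hi) * 0.5

             print(f'point average (centroid): {centroid}')
             print(f'bounding-box midpoint:    {bbox_mid}')
             # What Point 0 would become if the axis were moved to each candidate center
             print(f'Point 0 relative to centroid: {points[0] - centroid}')
             print(f'Point 0 relative to bbox mid: {points[0] - bbox_mid}')

         if __name__ == '__main__':
             main()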
  6. There's a free preview of the first 100 pages of the book over at google books: https://books.google.com/books/about/Maxon_Cinema_4D_2023_Modeling_Essentials.html?id=MOuxEAAAQBAJ&printsec=frontcover&source=kp_read_button&hl=en&newbks=1&newbks_redir=0&gboemv=1&ovdme=1&ov2=1#v=onepage&q&f=false ..., so you can make at least some sort of assessment of its quality/content.
  7. Here's a thought: Layer enhancements allowing for the assignment of multiple layers to an object, with overrides and perhaps some notion of property inheritance between ancestor/descendant layers.
  8. Here is another image that shows how Cinema 4D calculates the normal for a non-planar quad, with a detailed description of the math and elements shown appearing right below it:

     The violet and magenta line segments are simple Segment Guides that I added to help represent the diagonals of this non-planar quad. In terms of calculations, the ▲abc Normal is averaged with the ▲cda Normal to arrive at the final normal being sought, one that matches Cinema 4D's calculations for the quad "polygon," and is represented in the image by the ▰ abcd Normal. If you look closely, you can even see the yellow Cinema 4D polygon normal protruding from the arrow tip of this blue (calculated) average normal. I hope that this picture explains it well.

     Each of the two triangle normals' directions was arrived at by calculating the normalized cross products of two edge vectors from the quad, with:
     - ab x bc used for the ▲abc Normal, and
     - cd x da used for the ▲cda Normal

     With the directions of the normals calculated, the yellow illustrative arrows representing them in the image were positioned so that they each originate from the centroid (aka barycenter) of their respective triangles, as one is normally accustomed to seeing them. The component triangle normal vectors were then used in the following calculations:
     1. Find the average of the two triangle normals using the formula: (▲abc Normal + ▲cda Normal) / 2
     2. Normalize the resulting vector to be a unit vector
     3. Show the final outcome as an arrow labeled with: ▰ abcd Normal

     The final normal's arrow is positioned so that it starts at the center of the quad, defined as the average of its four points' coordinates (the points are labeled in the image with a, b, c, and d). This is identical to how Cinema 4D calculates the quad's center, as you can see from the polygon normal's start point in the image (which also happens to be the World origin, as well as the quad's axis, as I positioned it).
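     To make this reproducible without the illustration, here is a small standalone Script Manager sketch implementing exactly the averaging construction described above (the two cross products, their average, and the quad center as the mean of the four points), applied to the first polygon of the selected polygonal object. It mirrors the math in this post; I am not claiming it is Cinema 4D's actual internal code path.

         import c4d

         def main() -> None:
             ob = doc.GetActiveObject()
             if not isinstance(ob, c4d.PolygonObject) or ob.GetPolygonCount() == 0:
                 print('Please select a polygon object with at least one polygon.')
                 return

             poly = ob.GetPolygon(0)  # CPolygon holding the point indices a, b, c, d
             if poly.c == poly.d:
                 print('The first polygon is a triangle, not a quad.')
                 return

             a, b, c, d = (ob.GetPoint(i) for i in (poly.a, poly.b, poly.c, poly.d))

             n_abc = ((b - a) % (c - b)).GetNormalized()  # ab x bc  (% is the cross product)
             n_cda = ((d - c) % (a - d)).GetNormalized()  # cd x da

             n_quad = ((n_abc + n_cda) * 0.5).GetNormalized()  # averaged, then re-normalized
             center = (a + b + c + d) * 0.25                   # mean of the four points

             print(f'triangle abc normal:  {n_abc}')
             print(f'triangle cda normal:  {n_cda}')
             print(f'averaged quad normal: {n_quad} (drawn from center {center})')

         if __name__ == '__main__':
             main()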
  9. I just want to mention that you have to be very careful with these sorts of scripts when dealing with non-planar quad polygons that may be present and selected. For example, consider the following view through a Parallel Camera (to minimize perspective distortion) being used as the Viewport camera:

     The magenta diagonal line is a guide that goes from Point 0 to Point 3, representing how this non-planar quad is presently being triangulated by Cinema 4D. The yellow line segment emanating from the center (actually through the center, sort of, since it starts quite a ways below the surface of the quad, at what Cinema 4D considers to be the center point of this quad) represents what Cinema 4D considers to be the polygon's normal, with Polygon Normals enabled for display in the Viewport (and shown, via Preferences, at a 400% scale, so that they are long enough to be "legible"). Similar yellow line segments going through the corner points of the polygon, added via creative use of a Matrix object (note the little, originally white, cubes at the vertices that it thinks it is aligning!), overlay a copy of the polygon normal at each point, so that it can be compared with the light gray vertex normals coming up from each point, as rendered by Cinema 4D, since Vertex Normals are also enabled for the Viewport. In addition to all of the above, the point indices are also displayed to label the individual vertices, courtesy of the aforementioned Matrix object.

     I should also note that there is a slight discrepancy between the direction of the Z Modeling Axis and the quad normal, as displayed. Active tool info: Move Tool in Polygon mode with the Modeling Axis set to Selected/Axis and all else default. This is not a figment of your imagination, and it is noticeable regardless of whether the Viewport is using a Perspective camera or a Parallel camera. They are truly pointing in slightly different directions, and I don't know which of the two is "correct."

     Here are the point coordinates making up the sole polygon of the polygonal object (uncreatively) named nonplanar_quad, as shown via the Structure Manager:

     Finally, here is what the Python script quoted in the above post produces as the normal for the polygon comprising our nonplanar_quad object, along with additional data describing the point properties of the object and its sole polygon, as retrieved via the usual Cinema 4D Python API member functions called on the object:

     The resulting normalized unit vector (last line of the output above) lacks a Z directional component, even though Cinema 4D's interpretation (via those yellow normal line segments and their leftward tilt in the view) clearly shows that it thinks a Z component should be present in the polygon normal. The cause of the difference is that the quoted script only uses polygon points a, b, and c to calculate the cross product of rays ba and cb as the polygon's normal. For this non-planar case, polygon point d also plays a role in what Cinema 4D considers to be the polygon's "true" normal. Of course, we are not dealing with a quadrangle or polygon in the mathematical sense here, since it is non-planar, but it can be argued that the (averaged) normal for this non-planar quad should take into account the individual normals of both of the planar triangles from which it is constructed.
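     To put a number on that last point, here is a tiny illustration of the difference, using an invented non-planar quad (these are deliberately not the nonplanar_quad coordinates from the screenshots, just convenient round values). The a-b-c-only normal computed by the quoted script ignores point d entirely, while the averaged normal picks up the tilt that d introduces:

         import c4d

         # Hypothetical non-planar quad: three points in the XZ plane, point d lifted in Y
         a = c4d.Vector(-1.0, 0.0, -1.0)
         b = c4d.Vector( 1.0, 0.0, -1.0)
         c = c4d.Vector( 1.0, 0.0,  1.0)
         d = c4d.Vector(-1.0, 0.5,  1.0)

         n_abc_only = ((a - b) % (b - c)).GetNormalized()  # rays ba x cb; d plays no role
         n_abc      = ((b - a) % (c - b)).GetNormalized()
         n_cda      = ((d - c) % (a - d)).GetNormalized()
         n_averaged = ((n_abc + n_cda) * 0.5).GetNormalized()

         print(f'a-b-c only normal: {n_abc_only}')  # axis-aligned; no contribution from d
         print(f'averaged normal:   {n_averaged}')  # gains X/Z components because d is out of plane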
  10. The short answer is, "yes." Many gemstones have complex IOR values. In particular, gemstones with a single IOR >= 1.65 have highly complex refractive indices that differ for different wavelengths, viewing angles, changes in density based on impurities in the stone, etc. In fact, the single real-world dispersion value that is often quoted (e.g., the Abbe value), and which is meant to take the IORs at different wavelengths into account, is just an approximation: it is either based on a single wavelength of light, usually tied to a particular Fraunhofer line (e.g., nD, based on the sodium D line), or on the difference in IOR between two pre-defined Fraunhofer lines (e.g., the B and G lines) at their respective wavelengths. For your specific case of blood, you can see a graph of the refractive index (eta) and extinction coefficient (kappa) at various wavelengths at: https://refractiveindex.info/?shelf=other&book=blood&page=Rowe ..., a good site for getting complex IOR curves for various materials.
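      As a small worked example of just how simple such a single-number approximation is, here is the standard Abbe number, V_d = (n_d - 1) / (n_F - n_C), computed from refractive indices at three Fraunhofer lines (d: 587.6 nm, F: 486.1 nm, C: 656.3 nm). The index values below are round, purely illustrative numbers, not catalog data for any particular stone:

          # Illustrative indices at three Fraunhofer lines (not real measurements)
          n_d = 1.650  # helium d line, 587.6 nm
          n_F = 1.660  # hydrogen F line, 486.1 nm
          n_C = 1.645  # hydrogen C line, 656.3 nm

          abbe_v_d = (n_d - 1.0) / (n_F - n_C)
          print(f'Abbe number V_d = {abbe_v_d:.1f}')  # lower V_d means stronger dispersion ("fire")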
  11. There is no direct support, but you can do this in XPresso using the technique illustrated in the following image:

      Image Legend
      - Layer Manager followed by Object Manager (partial screencaps)
      - XPresso Editor layout graph showing all nodes and connections

      The Set Layer By Name node is a Python XPresso Node, reachable via New Node -> XPresso -> Script -> Python. After creating the node in XPresso, you can pop out and use the Expression Editor to enter the code for it, resulting in the forthcoming image. Note: I have placed the actual code in text form below its graphic rendition, so that you can easily copy it into the Expression Editor.

          from typing import Optional

          import c4d

          doc: c4d.documents.BaseDocument  # The document evaluating this node

          # This code is not meant to perform the action described in absolutely the most
          # efficient way possible (which would potentially replace recursion with iteration
          # and add complexity), but it is sufficient, commented, and illustrative with
          # regard to how this could be implemented in a Python XPresso Node.

          # Perform a recursive search (current layer, then its siblings, then its children)
          # for a Layer whose name matches the name provided as the input to the XPresso
          # Node (i.e., LayerName).
          # Returns: The first (name-)matching layer found, or None if no layer has the
          # specified name.
          def find_layer(cur_layer: c4d.documents.LayerObject) -> Optional[c4d.documents.LayerObject]:
              global LayerName  # Node input

              # Check for a name match with the current layer
              cur_layer_name = cur_layer.GetName()
              # <Uncomment to debug> print(f'Layer being tested for equality to "{LayerName}": {cur_layer_name}')
              if cur_layer_name == LayerName:
                  # <Uncomment to debug> print('Matching layer found')
                  return cur_layer

              # If no match, test its sibling layers (if any)
              cur_sibling = cur_layer.GetNext()
              while cur_sibling:
                  if layer_found := find_layer(cur_sibling):
                      return layer_found
                  cur_sibling = cur_sibling.GetNext()

              # If no match, test its child layers (if any)
              cur_child = cur_layer.GetDown()
              while cur_child:
                  if layer_found := find_layer(cur_child):
                      return layer_found
                  cur_child = cur_child.GetNext()

              return None

          def main() -> None:
              global doc    # Document containing this node
              global Layer  # Node output

              starting_layer = doc.GetLayerObjectRoot().GetFirst()
              Layer = find_layer(starting_layer)
              # <Uncomment to debug>
              # if not Layer:
              #     print(f'A matching layer was not found, for layer name: {LayerName}')

      A side note for the color purists of the forum. Feel free to skip the following mumbo-jumbo entirely if you are among those that "just wing it" in the identification-and-choice-of-color department. I know that my choice of pink looks like I mistook the color magenta for pink, and in fact, being myself a "proud card-carrying member" of the fictional Worldwide Association of Self-Proclaimed Color Purists (WASCP), I had the same lingering thought after reviewing this message subsequent to its submission. Any similarity of my pink to magenta is just an unfortunate side effect of converting through multiple color spaces to arrive at the common denominator of sRGB for a post to this forum. Here is a larger swatch of the pink from the image above compared to a similarly sized swatch of true magenta (depicted at the same brightness level).
      Depending on the color gamut capability of the viewing device you are using to evaluate the following (sRGB color space) comparative image of the two colors, especially if said device is not both capable (at least 100% sRGB) and calibrated (ΔE ≤ 2), any differences between the pink (left rectangle) and magenta (right rectangle) may be both subtle and subjective in nature:

      A really tangential side note

      For anyone interested, it appears that the forum software auto-converts any .PNG images with an embedded P3 color space to sRGB. Before posting images, I would strongly advise you to perform the conversion yourself, with your (or your conversion software's) own idiosyncratic notions of how such a conversion should be performed - a dish best served cold and with a heap of salt to mask its bitter taste. This advice is especially important if the image happens to contain any colors that are outside of the destination (i.e., sRGB) color space.

      Having said that, here is an image with an embedded P3 profile, so you can judge for yourself how the forum software converts it to sRGB, because it does appear to be (even more) subtly different when compared to the preceding image of the two swatches. The differences are more apparent for the initially far more saturated right-hand magenta swatch, since the P3-to-sRGB conversion process used by the forum software is likely very different from the one I used. The forum's auto-conversion resulted in a rendition of the magenta swatch that is significantly brighter and a bit less saturated, which may now be more apparent if you compare the two sets once again, now that I have pointed this difference out. I will once again remind you that whatever differences you see, if any, are highly dependent on the state and capability of your viewing device, because the differences are quite subtle.

      If you are interested in the details that are considered, and the tradeoffs that are made, when selecting and using one of the more common color space conversion approaches, I would highly recommend the following layman-level article that explains the process of conversion and the various approaches commonly employed: Cambridge in Colour - COLOR SPACE CONVERSION.
  12. Yes, some sort of formula compiler would definitely come in handy and help cut down on "math-op node bloat."
  13. There is answer need no question that to makes sense no, thanks for Google Translate cooperation please you post. ..., in an effort to get the point we are making across to you, prior to just "plonking" you outright with an ignore. Do you know what's worse than a completely incoherent non-native English speaker who can at least admit to being at that level? A completely incoherent non-native English speaker who thinks that their incoherency is the fault of (all of the) listeners, rather than in their inability to form coherent self-consistent English phrases and meaningful unambiguous sentences. Here is a link to a poem that you should definitely read - it very much applies to you. Maybe after reading it, you will come to appreciate the point we are trying to get across and act accordingly: https://stihi.ru/2018/02/28/1470
  14. Perhaps unfortunately, I do not have a blog or other site to put this info on. I will see if the Admins are willing to allow for some sort of pinning, tagging, or other mechanism, to be able to get tutorials into a place where they are readily available (and visible, regardless of the age of the posts), going forward.
  15. Let me make a constructive suggestion: Please use Google Translate to translate from whatever native language you speak to English. Your English is extremely difficult to follow, and that is not a good starting point for getting an answer to your question. Having to undertake the mental load of trying to understand your English is more than any of us have the patience for. I am just trying to be helpful, even though statements like "You just don`t know an answer." are an assumption on your part, because I can practically guarantee that people on this forum definitely do know "an" answer to most of the questions that get asked, yours included.