Leaderboard

  1. dast (Registered Member): 5 points, 1,276 posts
  2. Rectro (Community Staff): 4 points, 3,798 posts
  3. zipit (Limited Member): 4 points, 15 posts
  4. Cairyn (Developer): 2 points, 800 posts

Popular Content

Showing content with the highest reputation on 03/21/2021 in all areas

  1. There you go! Version 0.1 of the SearchMaterial plugin is available in the download section.
    3 points
  2. Here is a video I have made that should take you to the next stage. I can see you have been working hard on this, so well done on your progress. This is 1080p, so you may need to wait for YouTube to process the high-res version if it's not showing yet.
    3 points
  3. Hi @bjlotus, hi @Cairyn, there is one problem with your normals calculation. You calculate just a single vertex normal in the polygon and then declare it to be the polygon normal 😉 (your variable "cross"), i.e., you assume all your polygons to be coplanar. With today's high-density meshes you can probably get away with that to a certain degree, but it would not hurt to actually calculate the mean vertex normal of a polygon, a.k.a. the polygon normal, to get a precise result. The problem with your "flattening" is that it is not one. Unless I have overread something here in the thread, the missing keyword would be a point-plane projection. You just translate all selected points by a fixed amount, which, if I understood that correctly, was only a work in progress, but that obviously won't work. Things could also be written a bit more tidily and compactly in a Pythonic fashion, but that has very little impact on the performance and is mostly me being nitpicky 😉 I did attach a version of how I would do this at the end (there are of course many ways to do this, but maybe it will help you). Cheers, Ferdinand

     """'Flattens' the active polygon selection of a PolygonObject.

     Projects the points which are part of the active polygon selection into
     the mean plane of the polygon selection.
     """
     import c4d


     def GetMean(collection):
         """Returns the mean value of collection.

         In Python 3.4+ we could also use statistics.mean() instead.

         Args:
             collection (iterable): An iterable of types that support
              addition, whose product supports multiplication.

         Returns:
             any: The mean value of collection.
         """
         return sum(collection) * (1. / len(collection))


     def GetPolygonNormal(cpoly, points):
         """Returns the mean of all vertex normals of a polygon.

         You could also use PolygonObject.CreatePhongNormals, in case you
         expect to always have a phong tag present and want to respect
         phong breaks.

         Args:
             cpoly (c4d.CPolygon): A polygon.
             points (list[c4d.Vector]): All the points of the object.

         Returns:
             c4d.Vector: The polygon normal.
         """
         # The points in question.
         a, b, c, d = (points[cpoly.a], points[cpoly.b],
                       points[cpoly.c], points[cpoly.d])
         points = [a, b, c] if c == d else [a, b, c, d]
         step = len(points) - 1

         # We now could do some mathematical gymnastics to figure out just
         # two vertices we want to use to compute the normal of the two
         # triangles in the quad. But this would not only be harder to read,
         # but also most likely slower. So we are going to be 'lazy' and just
         # compute all vertex normals in the polygon and then compute the
         # mean value for them.
         normals = []
         for i in range(step + 1):
             o = points[i - 1] if i > 0 else points[step]
             p = points[i]
             q = points[i + 1] if i < step else points[0]
             # The modulo operator is the cross product.
             normals.append((p - q) % (p - o))

         # Return the normalized (with the inversion operator) mean of them.
         return ~GetMean(normals)


     def ProjectOnPlane(p, q, normal):
         """Projects p into the plane defined by q and normal.

         Args:
             p (c4d.Vector): The point to project.
             q (c4d.Vector): A point in the plane.
             normal (c4d.Vector): The normal of the plane (expected to be a
              unit vector).

         Returns:
             c4d.Vector: The projected point.
         """
         # The distance from p to the plane.
         distance = (p - q) * normal
         # Return p minus that distance.
         return p - normal * distance


     def FlattenPolygonSelection(node):
         """'Flattens' the active polygon selection of a PolygonObject.

         Projects the points which are part of the active polygon selection
         into the mean plane of the polygon selection.

         Args:
             node (c4d.PolygonObject): The polygon node.

         Returns:
             bool: If the operation has been carried out or not.

         Raises:
             TypeError: When node is not a c4d.PolygonObject.
         """
         if not isinstance(node, c4d.PolygonObject):
             raise TypeError("Expected a PolygonObject for 'node'.")

         # Get the points, polygons and polygon selection of the node.
         points = node.GetAllPoints()
         polygons = node.GetAllPolygons()
         polygonCount = len(polygons)
         baseSelect = node.GetPolygonS()
         # This is a list of booleans, e.g., for a PolygonObject with three
         # polygons and the first and third polygon being selected, it would
         # be [True, False, True].
         polygonSelection = baseSelect.GetAll(polygonCount)

         # The selected polygons and the points which are part of these
         # polygons.
         selectedPolygonIds = [i for i, v in enumerate(polygonSelection) if v]
         selectedPolygons = [polygons[i] for i in selectedPolygonIds]
         selectedPointIds = list({p for cpoly in selectedPolygons
                                  for p in [cpoly.a, cpoly.b,
                                            cpoly.c, cpoly.d]})
         selectedPoints = [points[i] for i in selectedPointIds]

         # There is nothing to do for us here.
         if not polygonCount or not selectedPolygons:
             return False

         # The polygon normals, the mean normal and the mean point. The mean
         # point and the mean normal define the plane we have to project
         # into. Your image implied picking the bottom plane of the bounding
         # box of the selected vertices as the origin of the plane, you
         # would have to do that yourself. Not that hard to do, but I wanted
         # to keep things short ;)
         polygonNormals = [GetPolygonNormal(polygons[pid], points)
                           for pid in selectedPolygonIds]
         meanNormal = ~GetMean(polygonNormals)
         meanPoint = GetMean(selectedPoints)

         # Project all the selected points.
         for pid in selectedPointIds:
             points[pid] = ProjectOnPlane(points[pid], meanPoint, meanNormal)

         # Create an undo, write the points back into the polygon node and
         # tell it that we did modify it.
         doc.StartUndo()
         doc.AddUndo(c4d.UNDOTYPE_CHANGE, node)
         node.SetAllPoints(points)
         doc.EndUndo()
         node.Message(c4d.MSG_UPDATE)

         # Things went without any major hiccups :)
         return True


     def main():
         """Entry point."""
         if FlattenPolygonSelection(op):
             c4d.EventAdd()


     if __name__ == '__main__':
         main()
    2 points
  4. After being frustrated that I could not use UDIM UV maps in C4D, I found out that I actually can with Redshift. It's simple, so I made a note to self not to forget.
    1 point
  5. Hi @bjlotus, this was not me being pedantic about terminology, and you are computing the mean and the normals. I was just pointing out that you were only computing the normal for one of the planes in a polygon - unless I am overlooking something here. But for quads there are two, one for each triangle in the quad. Below is an illustration of a quad which I did manually triangulate for illustration purposes. You can see the vertex normals in white (which are each equal to the normal of the triangle/plane they lie in) and two plane normals in black, one for each triangle. The actual polygon normal is then the arithmetic mean of both plane normals. So when you just pick one vertex in a quad to calculate the polygon normal, i.e., simply declare one of the white normals to be the polygon normal, you assume that both tris of the quad lie in the same plane, i.e., that the quad is coplanar. Which of course can happen, but there is no guarantee. Quads today are usually quite close to being coplanar (and not as comically deformed as my example), but you will still introduce a little bit of imprecision by assuming all quads to be coplanar (see the first code sketch after this list). Cheers, Ferdinand
    1 point
  6. I apologize for posting into a thread where I certainly do not belong, as I'm not into sculpting at all. Please also forgive my language... but holy sh**!!! I actually just wanted to take a quick peek and then I had to watch @Rectro's video to the full extent of almost 80 minutes. I enjoyed it so, so much. Absolutely gorgeous. I couldn't see any way to express my respect and gratitude by just the click of a like button. This is so much beyond the usual help or answer to be expected in a forum. I am awestruck. I mean, it's not only displaying a level of mastery, but it also features a very calm and comprehensible way of explanation. Really top notch. Thanks for providing us a glimpse at your profession, even if (and I'm pretty sure about this) it was only a peek at the surface of what I imagine to be a much vaster and deeper knowledge of the topic. Which makes the ease of explanation even more impressive, as it is certainly not easy to boil down such knowledge into first steps and make them look easy and light. You really made my day! Thanks so much. @Rectro If you should ever feel the need for any scripting/automation work, please consider contacting me, I'd like to "pay" something back.
    1 point
  7. Thanks @zipit for posting the solution. I love to see parts of the code described so well, showing how all these things work 🙂 ...
    1 point
  8. Hi @Cairyn, hm, I did not want to imply otherwise. I was just being polite and said hi to everyone before waltzing in here 😉. I assume my "hi @Cairyn" was the cause of that misunderstanding. Cheers, Ferdinand
    1 point
  9. Another 'bonus tip' if you want best-quality glass in Redshift:
     Lights and HDRI domes > Ray > Affected by Refraction: Always
     Material > Optimisations > Cull Dim Internal Reflections: Off
     Material > Advanced: shadow opacity may need to be increased
     Render Settings > Optimization > Reflection/Refraction/Combined: 16
    1 point
  10. I personally haven't had the need (yet), but I understand your request. Let's see if I can come up with a quick plugin solution to help you out, if you are interested. I am not sure this will be as integrated into the UI as the native ObjectManager search is... let's see how far I can take it.
    1 point
  11. 😁 Indeed. ZBrush is the one 3D app that I have never been able to get along with. But I do appreciate its functionality.
    1 point
  12. (just to avoid misunderstandings, I have nothing to do with that code and did not investigate it in detail. No time for unpaid extra work these days. But I am sure @bjlotus will appreciate your corrections, so thanks!)
    1 point
  13. This might not be useful to you, but what I do is use spheres as eyeballs, and slightly larger hemispheres as eyelids, a separate one per eyelid. I then use an FFD to model the eyeball to the necessary shape, and tuck the eyeball, eyelids and the FFD into a null. EyeLid - anim.mp4 With the FFD disabled it looks like this: EyeLid - anim (no FFD).mp4 EDIT: Forgot to mention the most important part ... With the above setup you simply animate the rotation of the eyelids. No morphing, no bones.
    1 point
  14. Hmm, I'm not sure how I can help you here (except programming the stuff for you, which I'm not gonna do as I don't have the time). From glancing at your code, I see that you know the basics already, including the cross product, so I gather you know the math behind it. If you want to go on piecewise, you may want to think about your phrase "where the selection starts", which is a fairly difficult thing to find. Usually the center of the plane that you flatten against would be the average of the points involved, as that is easy to find. If you want anything else, you need to take into account that your object may be rotated arbitrarily, and that the points that you want to flatten may form some rotated structure within the object as well. So, assuming you want to define "where the selection starts" through a bounding box, that bounding box may be axis-parallel to the world, axis-parallel to the object's local system, or it may be an optimal bounding box for the selected points (a BB with minimal volume, which is not easy to calculate). Or, actually, any other box containing your points. You could use a bounding box whose direction is defined by the average normal. But there is no guarantee that the BB's lower boundary actually equals the red line in your screenshot - that is totally a consequence of your symmetric object and symmetric point selection. ...Delay the decision for a while and think about the transformation first. Use the average point center as the zero coordinate for your flatten-plane and create the flatten-plane from that and the average normal (see the second code sketch after this list). That way, you have at least a coordinate system to work with. Then, make a sample scene that includes some asymmetries and rotations so you are forced to set up the transformation correctly. Continue as described above. Once you have the flattening basically done, you can think about the bounding box and the placement of the flatten-plane again. (Low hanging fruits, and stuff... 😉 )
    1 point
  15. @hvanderwegen Nice one for showing your model. Drawing is a great way to articulate anatomy, as you have to pay more attention to 3D forms captured in 2D space. When I signed up for the Scott Eaton anatomy course that's just what we did, drawing écorché; it was quite intense but worth it. A great book that's very visual, which I wish I had back in 2010, is Anatomy for Sculptors. Before that I had Eliot Goldfinger's Anatomy for Artists; both are very good books. I wish I had kept at this with the same intensity I had, as I'd be much closer to where I aimed to be. Dan
    1 point
  16. My opinion about this (and many artists share this opinion): without a good understanding of the anatomy of humans, animals, and such, it will be exponentially more difficult to create convincing models or sculpts of fictional characters and stylized characters. Unless you understand how everything 'relates' in reality, abstracting reality is just not easily possible, if not impossible - because without actually KNOWING how stuff works, it is impossible to reference it adequately. Simple as that. This is why learning to draw the human figure (and various animals) is also recommended for artists interested in learning to sculpt characters: it helps with understanding form and shape by observing. I recall my first ever 3D head model, which I modeled in C4D 6 (no sculpting tools at that time!) over two decades ago! ☺️ I referenced photos, and all the references I could get my hands on, and STILL I got many things very, very, VERY wrong! 🤭 But as a first 3D study of a human head it did pay off, and my subsequent models improved over time. Same with the first hand model and the first animal models that I did, and so on. Learning more about anatomy was super useful - not only for more realistic stuff, but just as much for fictional and stylized characters. Humans just 'know' when they are seeing something that isn't quite 'right'. Not to say that there aren't artists who intuitively grasp all this stuff and through practice deliver beautiful 3D characters even without knowing how to draw a simple figure 😛 But I am not one of them. I had to practice and learn about anatomy, shapes, movement, etcetera. Just my two cents.
    1 point
  17. Long-term Max and C4D user here. I prefer Max's modelling tools, modifier stack, viewport performance, and plugins like tyFlow. And I like C4D for its UI cleanliness (apart from the UV workflow, which was always terrible) and for MoGraph. The vanilla renderer is pretty useless, so you need a third-party one, whereas Max has Arnold built in these days. Never mind all that though, the one thing Max has had since the 90s (and AE for that matter) which I really wish they would put in C4D is one simple render checkbox: Skip existing files! You can render files in a range that maybe are missing or corrupted, or render a few test ranges to be filled in later, etc. It's so useful. And we're up to version 23 and it's still not in C4D.
    1 point
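
A minimal sketch of the quad-normal idea described in item 5, assuming the c4d Python module and purely hypothetical point values: the quad is split into its two triangles, a plane normal is computed for each with the cross product, and the polygon normal is the normalized arithmetic mean of both.

import c4d

def GetQuadNormal(a, b, c, d):
    """Returns the polygon normal of the quad (a, b, c, d) as the
    normalized mean of the plane normals of its two triangles
    (a, b, c) and (a, c, d).
    """
    # The % operator is the cross product, ~ normalizes a vector.
    n1 = ~((b - a) % (c - a))  # plane normal of triangle a-b-c
    n2 = ~((c - a) % (d - a))  # plane normal of triangle a-c-d
    # The arithmetic mean of both plane normals, normalized again.
    return ~((n1 + n2) * 0.5)

# A hypothetical, non-coplanar quad: d is lifted out of the plane of a, b, c.
a, b, c = c4d.Vector(0, 0, 0), c4d.Vector(100, 0, 0), c4d.Vector(100, 0, 100)
d = c4d.Vector(0, 50, 100)
print(GetQuadNormal(a, b, c, d))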
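
And a minimal sketch of the flatten-plane construction suggested in item 14, again assuming only c4d.Vector math; the selected points, the average normal and the helper names are hypothetical. The mean of the selected points serves as the plane origin, the normalized average normal defines the plane, and a world-axis-aligned bounding box of the selection is computed in case the plane origin should later be moved to its lower boundary.

import c4d

def BuildFlattenPlane(selectedPoints, averageNormal):
    """Returns (origin, normal) for a flatten-plane: the origin is the
    mean of the selected points, the normal is the normalized average
    normal.
    """
    origin = sum(selectedPoints, c4d.Vector()) * (1.0 / len(selectedPoints))
    return origin, ~averageNormal

def WorldAxisBoundingBox(points):
    """Returns (minimum, maximum) of the world-axis-aligned bounding box
    of points; its lower y boundary could replace the plane origin's y
    component if the plane should sit at the bottom of the selection.
    """
    mins = c4d.Vector(min(p.x for p in points),
                      min(p.y for p in points),
                      min(p.z for p in points))
    maxs = c4d.Vector(max(p.x for p in points),
                      max(p.y for p in points),
                      max(p.z for p in points))
    return mins, maxs

# Hypothetical selection and average normal, just to exercise the helpers.
pts = [c4d.Vector(0, 0, 0), c4d.Vector(100, 20, 0), c4d.Vector(100, 10, 100)]
origin, normal = BuildFlattenPlane(pts, c4d.Vector(0, 1, 0.1))
print(origin, normal, WorldAxisBoundingBox(pts))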
Γ—
Γ—
  • Create New...

Copyright Core 4D Β© 2023 Powered by Invision Community