Fritz (Maxon)

Everything posted by Fritz

  1. Sorry, I can help with everything Voronoi Fracture related, but I only watched a few tutorials for xpShatter and never used it myself. 😕 Insydium promotes a Discord server on their page; maybe that widens the net for finding an XP specialist who can help.
  2. Hi Tread, I am not sure about that, but I suspect it uses Voronoi fracturing under the hood. What I can say is that it has a better workflow for multiple fracture layers (fracturing the fractures, etc.). It is possible to set that up with multiple Voronoi Fracture generators, but that can get complicated. Let's say they are very similar, with xpShatter having more of a VFX workflow focus, while Voronoi Fracture has a MoGraph focus. Regards Fritz
  3. Hi Dalizade, the GetActiveObjects function returns only the selected objects. By default it does not return the children if a parent object is also selected. With doc.GetActiveObjects(c4d.GETACTIVEOBJECTFLAGS_CHILDREN) you can change that, so selected children are returned as well. This still does not return unselected children. To get those, you can write a small helper function like the one in the attached script, which returns a list of all children of a parent (excluding the parent itself). getchildren.py
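For readers without Cinema 4D at hand, here is a minimal sketch of such a helper. `Node` is a hypothetical stand-in for `c4d.BaseObject`, mimicking the `GetDown()`/`GetNext()` traversal the real API exposes; the attached `getchildren.py` may differ in detail:

```python
# Minimal stand-in for c4d.BaseObject: each node knows its first
# child (GetDown) and its next sibling (GetNext), like the C4D API.
class Node:
    def __init__(self, name):
        self.name = name
        self.down = None   # first child
        self.next = None   # next sibling

    def GetDown(self):
        return self.down

    def GetNext(self):
        return self.next


def get_children(obj):
    """Return ALL descendants of obj (excluding obj itself), depth-first."""
    result = []
    child = obj.GetDown()
    while child is not None:
        result.append(child)
        result.extend(get_children(child))  # recurse into grandchildren
        child = child.GetNext()
    return result


# Build a tiny hierarchy: parent -> (a -> (a1), b)
parent = Node("parent")
a, a1, b = Node("a"), Node("a1"), Node("b")
parent.down = a
a.down = a1
a.next = b

print([n.name for n in get_children(parent)])  # -> ['a', 'a1', 'b']
```

Swapping `Node` for a real `c4d.BaseObject` should work unchanged, since only `GetDown()` and `GetNext()` are used.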
  4. Wow, what a mistake to give it another try to communicate with you after the first encounter was already the worst experience. Either there is a language barrier (sorry, not a native English speaker) or you are intentionally reading negatives into my responses. I honestly tried to understand why the rest length is so "crucial" and instantly recognized that it's useful. And on the curly option: it honestly doesn't get easier than a checkbox, and it comes with the same set of conditions as it does in Vellum, which I tried to help you with so you'd get a good first experience with the feature. I don't think you are trying to help, and your social media rants are borderline slander. You burned all bridges. Goodbye.
  5. I will not discuss possible development plans here, so no comment on which features will or will not be worked on. I find a Twitter screenshot a bit lacking in information about why you need rest length control to do that image. The rope twist constraint is a single checkbox, and you get the constraint added for the same effect as the hair shelf tool in Houdini. I don't understand what's not easy about it. I just gave you further information, which also applies to Vellum.
  6. Hi Thanulee, thanks for the feedback. - For caching I suggest using Alembic for the moment. - I don't disagree on rest length control, but could you explain how it is most crucial for cloth? - Nearly every parameter can already be controlled by a map (unfold the parameter with the ">"). Pressure is a volume constraint and thus not controllable by a map. You can control the stretchiness of the surface instead if you want areas to stretch under internal pressure. - Twist constraints on ropes can be activated with the "curly" option. This constraint can quickly feel like it does nothing with many rope segments, each having a tiny error in the solve, so you either need very high iterations or lower-resolution lines (reduce the spline subdivision). Regards Fritz
  7. @thanulee I don't want to argue on the internet. I just feel like you are somewhat exaggerating the gravity of this. Every single one of your Twitter posts complains about this. You said you called support multiple times, and you post here only about this. I gave you context and a workaround: I get very similar results to the old cloth with higher bendiness, 100x scale and 1/10th strength. At least one of your support calls triggered a ticket for the issue, and it has been set to fixed. I didn't say you were impolite, but you seem to be very invested in this comparably small issue for days and announce that you'll continue to be. I honestly don't think that is the best use of your time, but feel free to do so. Have a great day 🙂 Fritz
  8. @Thanulee Are you really planning to be mad for a week over this little thing? I never said it is perfectly fine, so please don't put words in my mouth on Twitter. I stated the reasoning behind the change of strength parameters, but in this case the scale is the bigger offender: it should be 100 times larger. With "not broken" I meant that it does work, just a little differently; you made it sound like it does nothing. Not all forces, settings and modes are supported, as this does not use the old forces code; all forces had to be rewritten, or written similarly, for a GPU programming language. Note also that the default cloth settings are a lot stiffer. If you have small-scale turbulence poking at a tiny location for a short moment on a stiff cloth, that won't move the cloth much. Now do that everywhere, from all sorts of directions, and of course it's going to be pinned in place. The same thing happens to the old cloth if you reduce the scale by 100, even with a less rigid cloth.
  9. Hi Thanulee, the turbulence is unit-less, so saying it "works" in Bullet as expected is a strange statement. Is it optimal that they interpret the value differently? No. Is anything broken? No. This was a decision to kind of give it a unit and interpret the value as something physical, to maybe at some point raise the default and make it more consistent. For the moment, just raise the value. If you could specify the "glitches", I could tell you which setting counteracts them. Regards Fritz
  10. If you have scenes that feel unnecessarily slow, DM me the scene if you can share it. Most of the time it's one of these: - feedback-loop setups that Cinema does allow, but which trigger a recalculation with every click; - bugs that falsely report that something constantly changed, so things are recalculated; - improper use of a Cloner without Multi-Instance; - code that just doesn't scale well and is executed in an extreme way; - character rigs updating the same joint orientation/position through multiple sources (XPresso + IK, for example), creating constant change reports with every click in Cinema. Regards Fritz
  11. Try selecting all polygons before pressing the button.
  12. The Skin deformer is already parallelized, but with rather "simple" tasks like transforming a point by a few bones, this doesn't make a big difference and can sometimes be slower than single-threaded. Tests showed that the GPU update and upload of the new version of the animated polygon object was a much slower task than the Skin deformer itself. If you have example scenes that you can share through DM, I could profile them to see where the time is really spent.
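To illustrate how small that per-point task is, here is a generic linear-blend-skinning sketch (a textbook formulation, not Maxon's actual Skin deformer code): each point is just a weighted sum of itself transformed by a handful of bone matrices.

```python
# Linear blend skinning for a single point: a weighted sum of the point
# transformed by each influencing bone's matrix. Generic sketch, not
# Cinema 4D's implementation.
def skin_point(point, bone_matrices, weights):
    x = y = z = 0.0
    for m, w in zip(bone_matrices, weights):
        # m is a 3x4 row-major matrix (rotation part + translation column)
        px = m[0][0]*point[0] + m[0][1]*point[1] + m[0][2]*point[2] + m[0][3]
        py = m[1][0]*point[0] + m[1][1]*point[1] + m[1][2]*point[2] + m[1][3]
        pz = m[2][0]*point[0] + m[2][1]*point[1] + m[2][2]*point[2] + m[2][3]
        x += w * px
        y += w * py
        z += w * pz
    return (x, y, z)


identity = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
shift_x  = [[1, 0, 0, 2], [0, 1, 0, 0], [0, 0, 1, 0]]  # translate +2 on X

# A 50/50 blend of "stay put" and "move 2 units on X" moves the point 1 unit.
print(skin_point((1.0, 0.0, 0.0), [identity, shift_x], [0.5, 0.5]))
# -> (2.0, 0.0, 0.0)
```

With only a few multiply-adds per point, the cost of spawning and synchronizing threads can rival the work itself, which is why parallelizing it buys so little here.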
  13. Run an Optimize before the Bevel (U~O shortcut). There seems to be a duplicate point on the corner; Optimize welds them together.
  14. If you have 25.115 installed and it works for you, it's absolutely okay to keep using it. The effector bug is fixed. You will notice if the version doesn't work for you.
  15. Hi Shrike, what you are describing are the capsules. Have you tried dragging in the "Object Group"? That is an empty, build-your-own node graph that acts as a generator object in the classic scene. The same goes for the "Geometry Modifier Group", which is a nodes deformer. Also: did you try "C" or double-clicking in the nodes editor? Regards Fritz
  16. Hi Sandidolsak, even in classic, you can hardly know what object goes into the deformation function of a different deformer. Deformers are a stack, and one object is pushed through them. If the UVtoMesh is not the first deformer in that stack, a different one might already have changed the initial object. There is pretty much no way to intercept that object in classic, other than creating a copy in the UVtoMesh and writing it out somehow for the MeshToUV to read (afaik that's how the Welter plugin worked). Coming to a hypothetical Python node: EVERYTHING in the nodes area is new. Referencing anything in classic, as well as the capsule integration, requires old -> new conversions. So referencing classic and the OM would not be as simple, and I would assume it's better forbidden to prevent chaos. But again, all just hypothetical. Regards Fritz
  17. The Python node would still have to reference the original object somehow, before UV to Mesh. That's not really a functional problem of the nodes, but a conceptual one of the capsules: you just cannot inject operations in between, and they themselves are encapsulated per capsule (thus the name :D).
  18. Can't think of a way to do that 😕.
  19. Here is a UV morph. Much more complicated. Press play uvmorph.c4d
  20. Here is an example of using UV to Mesh, then sampling a noise at the 3D position, displacing the points and then setting them again as UVs. All packed in a deformer-like capsule with exposed parameters, ready to be used in a project near you. uvdeform.c4d
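The flow inside that capsule can be outlined outside of the node graph as well. This sketch uses a made-up `fake_noise` in place of Cinema 4D's noise node and plain tuples in place of real mesh data; the actual uvdeform.c4d setup will differ:

```python
import math


def fake_noise(x, y, z):
    """Hypothetical stand-in for a 3D noise node: deterministic, in [-1, 1]."""
    return math.sin(12.9898 * x + 78.233 * y + 37.719 * z)


def displace_in_uv_space(points, uvs, strength=0.1):
    """UV-to-Mesh style pass: lay each point out flat at its UV coordinates,
    then displace that flattened layout along Z by noise sampled at the
    point's ORIGINAL 3D position. The result is what would be written back
    as the deformed geometry."""
    result = []
    for p, (u, v) in zip(points, uvs):
        n = fake_noise(*p)                    # sample at original 3D position
        result.append((u, v, strength * n))   # UV layout + displaced height
    return result


pts = [(0.0, 0.0, 0.0), (1.0, 2.0, 3.0)]
uvs = [(0.0, 0.0), (1.0, 1.0)]
out = displace_in_uv_space(pts, uvs)
print(out[0])  # noise at the origin is 0 here -> point stays at (0.0, 0.0, 0.0)
```

The key point from the thread is visible in the loop: the noise must be sampled at the original 3D position, which is exactly the reference a capsule can hold internally but a node injected between capsules cannot reach.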
  21. Hi Sandidolsak, UV to Mesh is an asset, and a very simple one at that. If you drag it into the nodes editor and select it, somewhere in the node editor menu is "convert asset to group" or something like that (sorry, I don't have it in front of me). You can then inspect how UV to Mesh works and maybe invert that logic. Note that the topology you set needs to keep the same polygon count. Regards Fritz
  22. Hi Soronder, yes that is how it works. Looks like an update issue. Try changing the subdivision of the cube.
  23. (In thread "R25 Expectations") Hi jVjVjV, I would love to get an example scene in a PM that I could have a look at. In my tests it was the same at the time OpenVDB was integrated. If you have a simple but clear example of the difference, it might be a good starting point to find a solution if there is an issue. Regards Fritz
  24. It is, because Photoshop... And.. reasons.. 😄
Copyright Core 4D © 2024 Powered by Invision Community