Fritz

Maxon
Everything posted by Fritz

  1. Save Result to File just saves the current result (of the frame you are at) to the file, so that loading the file doesn't need to recalculate. It is not related to animation caching. I would suspect the sound field is messed up in this case. Did you try replacing the VF with a cloner? Sometimes caches are messed up because some setting has different "render time" values, like SDS subdivision.
  2. Or you open your new.c4d, reset the Pyro settings to default, and save again. It works properly; it just has different defaults.
  3. The built-in Redshift is a full version. By default, Cinema 4D includes a CPU license. If you have a GPU license, it should also work with that version. If there is a newer Redshift version or plugin, it is installed separately and then used if found. The Redshift release cycle is not in sync with Cinema 4D releases, so this allows new Redshift features outside of a Cinema 4D release.
  4. Hi EALEXANDER, this is unrelated to the simulation, I'd say. It appears Corona doesn't handle this well. Unfortunately I am not familiar with Corona; maybe there are some settings to get a better result. In any case, that is where you need to search for solutions. Check whether maybe some of the lights are inside the bounding box of the volume; that is often a source of such problems.
  5. Search the Asset Browser for "subdivision" and drag the result into the Object Manager. This is a deformer that does the same as SDS and has a selection string field. Create a polygon selection on the mesh you want to subdivide and drag it into the selection string; the result should be the name of the tag in quotes.
  6. If the render plugin supports Cinema 4D volume objects/Volume Builder and isn't too hard-coded, it should work out of the box. Worst case, you can always export everything as .vdb, either through caching or volume export, and load that in the volume loading object that pretty much every third-party renderer has. I have tested none of this, but there is nothing that stops the third-party devs from supporting it. The grid names, if you have to set them up, are like the channels in the Pyro object, just lower case.
  7. If the viewport preview doesn't look as good as embergen it must be shit 👍😊
  8. They call it stiffness; in Cinema it's called bendiness and stretchiness. It's independent of the mesh, just an inverted name, and even clearer.
  9. This is not about whether the point is valid or not. You are using Vellum as an example, which is also an XPBD system. It is exactly the same, with all the same conditions; there are even parts of their help talking about exactly that. You are just trying to tell a developer of the system that he has no clue what he is saying, because you don't like the answer, even though that answer was trying to help you.
  10. Turn on deform editing for the viewport, or deselect the object (you can select the tag instead). Since R26 the simulation is a deformation instead of overriding the original. If you want to follow an old tutorial using the old cloth: Basic tab of the Cloth tag > Legacy Solver > check.
  11. It's not a "developer approach". It's the state of the art research in XPBD. It has nothing to do with perspective or design.
  12. Depends. The cloth itself, yes. But the deleting of the polygons is a postprocess done by the cloth generator: it detects that there is a cloth as input and takes the data out. That is of course not available in an Alembic, so exporting the cloth generator output will have the same problem. Regardless, the tearing in the old cloth looks terrible, and in general it is so slow that you would not reach the polygon count necessary to mask the terrible cloth tearing.
  13. I could only think of rendering it with an alpha background and maybe a shadow catcher, and compositing the time-remapped layer in post with frame interpolation. It's an incredibly hard problem to solve with a changing point count. The interpolation would need all the data about where the new points originated from, in addition to where they are, or it would look too buggy anyway. Alembic doesn't have that data, and the caching doesn't have a remapping system yet. The old cloth circumvented this by never changing the point count; instead it hid/deleted the polygons around the torn edges and deactivated the sim locally.
  14. You can duplicate the field layer setup, but then you have to duplicate the field objects in the OM and drag the respective copies onto the field layer copies.
  15. The stiffness of the past is just inverted bendiness, so bendiness in the tag is what you are looking for; that is what primarily governs the visual bendiness of the simulation. Simulating stiff materials with a soft-body simulation, however, needs a lot of iterations, and even more when you have higher-resolution meshes. If you don't have enough substeps or iterations in relation to the mesh density, it will also appear very bendy and not stiff, because the solver cannot achieve in one simulation step what your low bendiness setting asks for, and loses against gravity, collisions and other forces in making the cloth stiff. So high-res meshes also require high substeps and/or high iterations in the solver settings, otherwise the low bendiness you've set up cannot be achieved. Conversely, a lower-res mesh is easier and faster to make stiff. If it's supposed to be stiff, you luckily often don't need the mesh resolution to show details like wrinkles, so creating a low-res sim is fine. You can always simulate a proxy and transfer the simulation to a higher-resolution version with a mesh deformer.
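The iteration/stiffness relation can be illustrated with a toy position-based-dynamics chain (my own sketch, not the actual Cinema 4D solver): the same constraint setup comes out visibly stretchier when the solver gets fewer iterations per step.

```python
# Toy PBD rope: a pinned chain of points falls under gravity, and each step
# the distance constraints are projected a fixed number of iterations.
# Too few iterations -> the constraints cannot converge, the chain stretches,
# and the material reads as soft even though the setup asks for stiff.

def simulate(n_points=30, iterations=2, steps=60, dt=1.0 / 30.0):
    rest = 0.1                          # rest length of each segment
    gravity = -9.81
    pos = [[0.0, -i * rest] for i in range(n_points)]
    prev = [p[:] for p in pos]
    for _ in range(steps):
        # Verlet integration; point 0 stays pinned
        for i in range(1, n_points):
            x, y = pos[i]
            prev[i], pos[i] = [x, y], [2 * x - prev[i][0],
                                       2 * y - prev[i][1] + gravity * dt * dt]
        # Gauss-Seidel projection of the distance constraints
        for _ in range(iterations):
            for i in range(n_points - 1):
                dx = pos[i + 1][0] - pos[i][0]
                dy = pos[i + 1][1] - pos[i][1]
                d = max((dx * dx + dy * dy) ** 0.5, 1e-9)
                corr = (d - rest) / d
                if i == 0:              # pinned end: move only the free point
                    pos[1][0] -= dx * corr
                    pos[1][1] -= dy * corr
                else:
                    pos[i][0] += dx * corr * 0.5
                    pos[i][1] += dy * corr * 0.5
                    pos[i + 1][0] -= dx * corr * 0.5
                    pos[i + 1][1] -= dy * corr * 0.5
    # leftover stretch summed over all segments
    return sum(abs(((pos[i + 1][0] - pos[i][0]) ** 2 +
                    (pos[i + 1][1] - pos[i][1]) ** 2) ** 0.5 - rest)
               for i in range(n_points - 1))

soft = simulate(iterations=1)    # under-iterated: visibly stretchy
stiff = simulate(iterations=50)  # enough iterations: close to rest lengths
```

More substeps help for the same reason: each projection pass then has less gravity-induced error left to fight.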
  16. Entries in the fields list are called "field layers". What you are cloning is the layer, which may or may not get its effect and most of its settings from an external object or field. Remapping layers, for example, do not; field object layers do. So if you clone the layer, it will still reference the same field object. You can duplicate the field in the OM and drag the new one onto the cloned layer to replace the reference. The main concept of field objects is reusing the same field in multiple lists. This of course means there is no clear assignment of a field object to a specific list, hence the field object layer concept that references a field.
  17. You're welcome. I think at the time Thanassis didn't know about the future development. What he did know, however, is that we are not going to stop on the topic (if plans don't change), and that we have a relatively large team working on it. Hard to claim without a working crystal ball :). Who knows, maybe some higher-level management decisions make our team's plans obsolete; we usually hear about these not much earlier than the public. If somebody's guesses for "down the track" are based on wrong assumptions, you can estimate the worth of those.
  18. That was before the 2023 release and he was referring to the softbody and mix animation features that are already released. I can leak this much: there are no particles in the pyro simulation. If EJ said that, then maybe he was wrong.
  19. Yea, ML = machine learning. So instead of having a math formula, you train a (hopefully fast) neural network that you can evaluate to get the distance. Without question this will become big in rendering and will enable all sorts of things. Afaik Spectron by OTOY is an SDF raymarcher (all the fractals can be described as SDFs), but not sure :D. There is not really a problem combining the two; the only difference is finding the hit point when you start at a pixel. Then you can get a normal of the SDF just by calculating the gradient of the distance formula at the hit point. Is that slower? Well, you need more steps to get to the hit point, and it always depends on how slow the formula is, but GPUs are crazy good at this. Just check out Shadertoy: everything there that looks like a 3D scene is using a raymarcher. Regards, Fritz
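The stepping and gradient-normal idea fits in a few lines. This is my own toy illustration (the scene and function names are made up, not from any particular renderer):

```python
import math

def sdf_sphere(p):
    # distance to a unit sphere centered at (0, 0, 3); a stand-in scene
    return math.dist(p, (0.0, 0.0, 3.0)) - 1.0

def sdf_normal(sdf, p, eps=1e-4):
    # normal = gradient of the distance field, via central differences
    n = []
    for axis in range(3):
        hi = list(p); hi[axis] += eps
        lo = list(p); lo[axis] -= eps
        n.append((sdf(hi) - sdf(lo)) / (2.0 * eps))
    return n

def raymarch(sdf, origin, direction, max_steps=128, hit_eps=1e-4, far=100.0):
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < hit_eps:
            return p        # close enough: call it a hit
        t += d              # safe step: nothing is closer than d
        if t > far:
            break           # left the scene
    return None

hit = raymarch(sdf_sphere, (0.0, 0.0, 0.0), (0.0, 0.0, 1.0))
```

This version steps the full safe distance each time; conservative implementations step only a fraction of it to avoid overshooting at grazing angles.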
  20. Hi Happypolygon, the two concepts are pretty similar but have some differences. SDFs (Signed Distance Fields) describe the distance to a mathematical surface: at any point in space you can check how far you are from the closest point on that surface. The "signed" in SDF just encodes whether you are inside the surface or outside. This sign is what makes many boole operations and other effects easily possible: you combine the functions for the surfaces in a specific way, and you get a combined function that again gives you the signed distance to the combined surface.

SDFs in the Volume Builder do that, but discretize the distance into a volume (voxels). This has the biiiig advantage that you have baked down a formula that supports everything. Finding a formula that represents an arbitrary mesh is very, very hard and is taking baby steps with ML right now. With voxels you can just use a slow algorithm and bake the result down. This of course is only as precise as your voxels are small; for infinitely small voxels, it becomes the correct SDF again.

You can render SDFs with many methods, but the most common are raymarching and meshing with marching cubes. In raymarching you step along the ray through a pixel, each time half as far as the SDF says the closest surface is, until you have either left the scene at the back or are so close to something that you decide you hit the 0.0 value of the SDF. Marching cubes voxelizes the SDF and creates mesh patches wherever a voxel has corner values that are >0.0 and <0.0.

Metaballs are pretty much the same thing (the math is a bit different: they don't describe the distance to a surface but more a volume description of the surface), except that usually only spheres are used, and the blobby effect comes from combining the formulas differently. You usually don't do "Union" or "Subtract"; metaballs are added with their volume-describing formula. That creates the volume-preserving effect that SDFs don't have when you move them inside each other (there are ways to get a bit of that effect in SDFs too). Rendering for metaballs is pretty similar: raytracing/marching or marching cubes.

Why doesn't everybody raymarch? Well... then you have the thing in your software: the viewport needs to support it, the renderer needs to support it, and there is pretty much no export to anywhere. https://www.womp3d.com/ these guys are building an SDF modeler without meshing. Let's see how that goes. Regards, Fritz
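The difference between a hard SDF union and the additive metaball combination can be shown with toy formulas (the 1/d² falloff here is a simplified stand-in of my own, not any specific implementation):

```python
import math

spheres = [((0.0, 0.0, 0.0), 1.0), ((1.2, 0.0, 0.0), 1.0)]  # (center, radius)

def sdf_union(p, balls):
    # boolean union of sphere SDFs: simply the minimum distance
    return min(math.dist(p, c) - r for c, r in balls)

def metaball_field(p, balls):
    # additive falloff field; the surface is an iso-level of the sum,
    # so overlapping balls reinforce each other and bulge outward
    return sum((r * r) / max(math.dist(p, c) ** 2, 1e-12) for c, r in balls)

mid = (0.6, 0.0, 0.0)                   # midpoint between the two centers
union_d = sdf_union(mid, spheres)       # same depth as one sphere alone
both = metaball_field(mid, spheres)     # both balls contribute here
one = metaball_field(mid, spheres[:1])  # a single ball for comparison
```

At the midpoint the union distance is exactly that of a single sphere (no bulge), while the metaball sum is higher than either ball alone, which is the volume-preserving blobby effect. Real metaball kernels are usually polynomial with finite support, but the additive behaviour is the same.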
  21. Stray Points: I don't think those are stray points. Is it possible those are null objects? They show up as black dots in the viewport.
  22. Valid argument. Acquisitions to speed up the growth of what MAXON One offers make the devs previously working on the other products stop improving those. That's because of... physics and such.
  23. You are the one assaulting, always. I don't even know who Jeff H1 is, and I don't like being called a puppet. Please just stop constantly rating people's posts and insulting them.
  24. At least I am not trying to convince you that you are wrong. You have proven time and time again that you have no sense of self-reflection, and I honestly don't care about your opinions. I just can't stand you constantly attacking people with your weird sense of entitlement.

Copyright Core 4D © 2024 Powered by Invision Community