
Fritz

Maxon
Everything posted by Fritz

  1. Gravity is not the only force that can be applied to the objects. If you have no gravity, no other objects and no forces, then yeah, you don't need mass. But then you don't need a simulation, because nothing is moving. If it were always the default gravity with no way to change it, then yeah, you could have a weight setting and internally calculate the mass. But that's not the case.
  2. As you said correctly, weight is defined as mass times gravity. Since you can change gravity and all the forces independently of the object settings, defining a weight would interfere with that. The definition has to be the mass, so forces can calculate their influence on the object. In a way, mass with gravity present is the same as weight, just scaled. Since mass mostly matters relative to other masses, you can interpret it as weight if you must; forces need to be adjusted accordingly though. What you are trying to achieve there looks really hard to simulate though. Good luck 🙂 let us know how it goes. Little tip: the simulations are approximations and some things are far off from reality. Rigid and soft body interactions also depend on the mass of the soft body. In real life I'd say the mass of the surface barely matters, only its elasticity, but in the simulation the relative mass of the soft body and the rigid body is very important to how well the rigid body is held up.
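The mass-vs-weight distinction above can be sketched in a few lines of plain Python. This is an illustrative toy, not the C4D solver: it just shows that forces act on mass via Newton's second law, while "weight" is only the gravity force m*g and changes whenever g changes.

```python
def acceleration(total_force, mass):
    # Newton's second law: a = F / m. Only mass appears here,
    # which is why the solver needs mass, not weight.
    return total_force / mass

def weight(mass, g):
    # Weight is just the gravity force: it depends on g,
    # so it is not a property of the object alone.
    return mass * g

m = 2.0                          # kg (example value)
w_earth = weight(m, 9.81)        # ~19.62 N with Earth gravity
w_moon = weight(m, 1.62)         # ~3.24 N on the Moon: weight changed, mass didn't

# A 10 N sideways force accelerates the object identically in both
# cases, because the force only ever sees the mass:
a = acceleration(10.0, m)        # 5.0 m/s^2 regardless of gravity
```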
  3. Redshift does not render signed distance field volumes as surfaces, only float values as fog/density. Here are the steps you need:
     1. Put a volume material on the Volume Builder in fog mode.
     2. In the Volume Builder, set the voxel falloff a bit higher for the object you want to render.
     3. In the volume material there is a slot for density or scatter. To the right of the presets (small arrow), choose the Volume Builder. This sets the Volume Builder volume as the density of your volumetric effect.
     Now it should show up. No RS Volume object or VDB export is necessary, but that would also be possible if you set up the density channel properly.
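The fog-mode conversion above can be illustrated with a small sketch. This is a guess at what a voxel falloff effectively does, assuming a simple linear ramp (the actual curve C4D uses may differ): a signed distance d becomes a renderable float density that is 1 inside the surface and fades to 0 across the falloff band.

```python
def sdf_to_density(d, falloff=1.0):
    # d < 0 means inside the surface: full density.
    if d <= 0.0:
        return 1.0
    # Beyond the falloff band: empty space.
    if d >= falloff:
        return 0.0
    # Inside the band: linear ramp from 1 down to 0 (assumed shape).
    return 1.0 - d / falloff

# Increasing the falloff widens the soft edge of the fog:
thin_edge = sdf_to_density(0.5, falloff=1.0)    # 0.5
wide_edge = sdf_to_density(0.5, falloff=4.0)    # 0.875
```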
  4. area weighted normals _ phong lines_fixed.c4d Another try. The old version only crashed on load and was unrelated to the node stuff in the scene. If you want to talk further, please contact me on Backstage. I am apparently not wanted here and my comments are being deleted without reason.
  5. If you open a file, the folder it is in becomes a watch folder for the AB. Maybe move it to an empty folder before you open it. There is no endless loop in a node. Nodes don't have while loops or anything that doesn't have a fixed limit.
  6. Yep, have the texture tag set to camera or frontal (whatever you have set up), then select both the texture tag and the object and call "Generate UV Coordinates". The effectors have a "Selection" slot; drag the MoGraph selection in there.
  7. area weighted normals _ phong lines.c4d
  8. The undesired lines you are seeing are the result of the boolean creating points on the edges. You can see that by calling Current State to Object on the boolean. It is probably a result of how the boolean is internally calculated with triangles and quads before the ngons are reconstructed. These points are then correctly preserved by the bevel. The solid bevel appears to have some issues with the collinear points on ngon outlines. The Chamfer node, the Bevel deformer and the Bevel tool are all the same code, so don't expect differences there; they just behave differently when they fail. The Bevel generator appears to have been implemented to return nothing, while the node version returns the input. Both still do nothing. Solid bevel is also much more different from a standard rounded bevel than you'd expect and has its own implementation of many things (thus the different behavior in this case and the lower robustness).
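The collinear points mentioned above are easy to detect yourself, e.g. when cleaning up a boolean result with a script. A minimal sketch (2D case, tolerance value is an assumption): three points are collinear when the cross product of the two edge vectors vanishes, which is exactly the situation of a redundant point inserted on a straight ngon edge.

```python
def collinear(a, b, c, eps=1e-9):
    # 2D cross product of (b - a) and (c - a). Zero (within eps)
    # means the three points lie on one line, i.e. b is a redundant
    # point on a straight edge.
    cross = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    return abs(cross) < eps

# The extra point a boolean inserts on a straight edge looks like this:
collinear((0, 0), (1, 0), (2, 0))      # True: redundant midpoint
collinear((0, 0), (1, 1), (2, 0))      # False: a real corner
```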
  9. Make the input cube editable and put the material on there, create the uvs on that object how you'd want them. If that looks good to you with using uvs, then turn the voronoi fracture on. This will preserve the uvs you had and also cut them correctly apart on the surface. Then turn on the outside faces selection. These selections are generated and added to the fracture pieces, so you can assign materials to them. These are referenced by name, so you can assign this to the material being on the child input. Material and selection do not need to be on the same object.
  10. Thank you cerebral, that is the command I meant and it works. I just tried it after posting and was surprised that "Set UV from Projection" doesn't have "active camera" as an option. Stick Texture is apparently now called the Pin Material tag, but I also couldn't get that to work with camera projection.
  11. Sounds like a job for Stick Texture, or maybe create UVs on the input object that match the projection (there are commands for that). The UVs survive the fracturing. It is lossy though, because UVs are linear between points, while projections evaluated at render time can be independent of mesh resolution. Don't expect the result to look exactly the same as the projection; you can combat that by increasing mesh density.
  12. gradient.c4d This uses the bounding box circumsphere to remap the min and max values; it's a bit more stable. The null objects show the min and max and are driven by XPresso, then their positions are used in the shader.
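The circumsphere remap in the scene file can be sketched like this (a reconstruction of the idea, not the XPresso from the file): the smallest sphere containing an axis-aligned bounding box sits at the box center with a radius of half the box diagonal, and its extent gives stable min/max bounds for a 0..1 remap.

```python
import math

def circumsphere(bbox_min, bbox_max):
    # Smallest sphere containing the axis-aligned bounding box:
    # centered at the box center, radius = half the box diagonal.
    center = tuple((lo + hi) / 2 for lo, hi in zip(bbox_min, bbox_max))
    radius = math.dist(bbox_min, bbox_max) / 2
    return center, radius

def remap(value, lo, hi):
    # Map [lo, hi] to [0, 1], clamped, for driving a gradient.
    t = (value - lo) / (hi - lo)
    return max(0.0, min(1.0, t))

center, radius = circumsphere((0, 0, 0), (2, 2, 2))
# center is (1.0, 1.0, 1.0), radius is sqrt(3); a distance value is
# then remapped against [center - radius, center + radius] per axis
# or against the nearest/furthest distances along the view.
```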
  13. zerosixteosixes' solution is pretty much the best one and how I would approach this. The nearest-furthest range remap could be added to it. His solution uses camera distance; I would have used view-plane distance. Similar, but camera distance has a spherical bend at the edges of the image. Could be intentional, of course.
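The difference between the two metrics is easiest to see in code. A minimal sketch (camera at the origin, looking along +Z as an assumed setup): equal camera distances form a sphere around the camera, while equal view-plane distances form flat planes parallel to the image plane, so a point at the frame edge scores differently in each.

```python
import math

def camera_distance(point, cam_pos):
    # Euclidean distance: iso-surfaces are spheres around the camera,
    # which bends the gradient toward the image edges.
    return math.dist(point, cam_pos)

def view_plane_distance(point, cam_pos, view_dir):
    # Projection onto the (normalized) view direction: iso-surfaces
    # are planes parallel to the image plane.
    return sum((p - c) * d for p, c, d in zip(point, cam_pos, view_dir))

cam, fwd = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
edge = (3.0, 0.0, 4.0)                   # a point near the frame edge
camera_distance(edge, cam)               # 5.0: the spherical metric
view_plane_distance(edge, cam, fwd)      # 4.0: the planar metric
```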
  14. It might help if you let us know what render engine you are planning to use if you want it as a shader. As vertex colors or a vertex map it should be very easy: put a Vertex Color tag on all objects that are supposed to have the effect and turn on fields. Put a Linear Field in the field list; this is your 3D gradient direction. Turn on remap and set a color gradient. This should even preview correctly in the viewport. It is however limited in precision to the density of your mesh. This works on generators with 2023.0.0 and up (maybe S26, but I can't remember). A proper shader is doable; with OSL alone you can do nearly everything, and even without it this doesn't sound too hard. If Redshift or Standard/Physical are the render engines of your choice, let me know. Clamping the furthest point to the camera is a different story, as that information is rarely available in a shader. Shading happens locally and you cannot compute this information there. You could set this up with some math and XPresso to create that information at scene evaluation time and use it in the shader. This is however a bit more complicated.
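The Linear Field plus color gradient setup above boils down to a projection and a lerp. A rough sketch of the math (my guess at how a linear field samples, not Maxon's actual implementation; `linear_field` and `lerp_color` are illustrative names):

```python
def linear_field(point, origin, direction, length):
    # Project the point onto the normalized field direction and divide
    # by the field length, clamped to [0, 1]. This is the "3D gradient
    # direction" the Linear Field provides (assumed math).
    t = sum((p - o) * d for p, o, d in zip(point, origin, direction)) / length
    return max(0.0, min(1.0, t))

def lerp_color(c0, c1, t):
    # Two-stop version of the remap color gradient: blend per channel.
    return tuple(a + (b - a) * t for a, b in zip(c0, c1))

# A point halfway along a 100-unit field pointing up the Y axis:
t = linear_field((0.0, 50.0, 0.0), (0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 100.0)
color = lerp_color((0.0, 0.0, 0.0), (1.0, 1.0, 1.0), t)   # mid gray
```

Because the field value is baked per vertex, the precision limit mentioned above follows directly: between two vertices the stored value is only interpolated linearly.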
  15. PForceObject node is what you are looking for. Pipe a P Pass group through it and link the force you want to use.
  16. To spare you some pain, it was not only renamed. It works very differently now and the tutorial is unlikely to still be applicable. There is a legacy mode in the basic tab of the tag though if you really must follow the tutorial. In general the new version works a lot better and there should be tutorials out there for that.
  17. If you are on a subscription, try the "Remesh" generator. Since a few versions it has a new algorithm that might be able to handle this. It's meant to create a new tessellation of the surface shape you already have, made up of regularly sized quads. Sometimes it's magic, but there is no 100% guarantee that it handles every case well enough. Alternatively, if your shape is flat you could try Select All > Melt > Remove Ngons. This will re-tessellate the shape without adding new points on the surface; I don't think the result will be much better in your case though. Voronoi Fracture just cuts the object into pieces based on Voronoi patterns. If you still have trouble, let me know; when I am near a PC I can have a try with your scene.
  18. This is just not what it does. This "classic object" node is just an import into the node setup. The scene root is the only output. What you are doing is setting the position to the same position on the imported copy. If you output that as a copy, you might see the result. For driving parameters, the nodes cannot replace XPresso.
  19. That's a custom node generator he built, named, and chose an icon for.
  20. Pyro RS material is just a preset volume material and can't really be the reason. That is supposed to work, so if it doesn't it is a bug. To me it sounds like the VDB files are not transferred correctly. Another thing you could try is to "save with Assets" to a new location and try from there.
  21. Try loading the VDB sequence in a Volume Loader, replacing the Pyro output, and deactivate all Pyro in that case. If that also doesn't work, I'd suggest contacting Maxon support to see if they can help, or report a bug.
  22. Rigid body simulation is untouched in newer versions, so don't worry about that. The best thing would of course be a scene that shows your setup, but here are some things you might not have tested as deeply, or pointers on which direction the settings should go. Sometimes it needs multiple changes:
     - Set the collider shape to convex hull. Next to box, it is the most robust, and it should push intersecting objects out.
     - Increasing steps and max iterations should help.
     - Scene scale can help. Are small objects getting stuck? Try reducing it instead of increasing it.
     - Higher bounce might give a stronger collision response if the collision is detected but can't be resolved quickly enough.
     As you can see, pretty much what you tried. Hope it still helps somehow :/ Cheers, Fritz
  23. The metric is something more in the direction of where you would imagine the "outside" of the shape to be if you close the segment. That's why it flips at some point when you edit it. We recently had a post in the forum with a similar issue, where a 2D-constrained rope simulation also kept flipping orientation in the extrude. Some of these "automatic decision that works intuitively 95% of the time" internal calculations break the most basic requirements of an effect.
  24. Make the extrude editable and check the surface orientations. Mystery solved. Why does the extrude do that? Because it cannot guess intent. Each segment is processed individually, and the surface orientation is based on some internal metric.
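The "internal metric" for a 2D spline segment is typically something like its winding order. A minimal sketch of that idea (the shoelace formula; C4D's actual heuristic is not public, so this is only the standard approach): the sign of the enclosed area tells you which way the segment winds, and the winding decides which way the extruded surface normal faces.

```python
def signed_area(poly):
    # Shoelace formula over a closed 2D polygon given as (x, y) tuples:
    # positive for counter-clockwise winding, negative for clockwise.
    return 0.5 * sum(x0 * y1 - x1 * y0
                     for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]))

ccw = [(0, 0), (1, 0), (1, 1), (0, 1)]
signed_area(ccw)                  # 1.0: counter-clockwise
signed_area(list(reversed(ccw)))  # -1.0: same shape, flipped orientation
```

Editing a segment so its winding flips is exactly the point where such a metric changes its answer, which matches the flipping described above.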
  25. There is no topology. CAD tools mostly use freeform surface modeling kernels that do not work with polygons and tessellation, but with surface descriptions. A variety of export tools can then tessellate these surfaces into the discretized meshes used for video games, VFX, visualization and so on. For manufacturing, the tools and machines the data goes to often decide the precision a part can be manufactured in, so handing them continuous data guarantees no loss of precision on the way to the tool that creates the part. Most CAD tools license the kernel, as writing this yourself is decades of work to reach the level of established kernels; an example is Parasolid by Siemens. These aren't cheap, however. If I remember correctly, Kent once tried to crowdfund the development and kernel licensing for a CAD plugin for C4D, and the number wasn't small. What you see in Plasticity is basically a nice UI and tools to construct data inputs for a licensed kernel, which you can then exchange or convert to meshes. Some polygon-mesh-based programs like C4D also offer import of CAD data, which is then tessellated on import. The main point: unless Plasticity has an exporter to mesh exchange formats, the responsibility for the discretized mesh lies with the importing program.
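The precision control that tessellation-on-export gives can be sketched with the textbook chord-tolerance approach (illustrative only; real CAD exporters are far more sophisticated): pick a segment count so the sagitta, the maximum gap between a straight chord and the true arc, stays below a tolerance. Tighter tolerance means a denser mesh.

```python
import math

def tessellate_circle(radius, chord_tolerance):
    # Sagitta of one segment spanning angle theta: r * (1 - cos(theta/2)).
    # Solve for the largest theta whose sagitta stays within tolerance,
    # then divide the full circle into that many equal segments.
    theta = 2 * math.acos(1 - chord_tolerance / radius)
    n = max(3, math.ceil(2 * math.pi / theta))
    return [(radius * math.cos(2 * math.pi * i / n),
             radius * math.sin(2 * math.pi * i / n))
            for i in range(n)]

coarse = tessellate_circle(1.0, 0.1)     # a handful of segments
fine = tessellate_circle(1.0, 0.001)     # dozens of segments
```

This is why the continuous surface description is the master data: the mesh density is just a per-export choice, and the importing program owns the result.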

Copyright Core 4D © 2023 Powered by Invision Community