Fritz (Maxon)

  • Posts: 192
  • Days Won: 3 (last won the day on March 18, 2023)
  • Followers: 3
  • Birthday: January 1

Fritz's Achievements

  1. Gravity is not the only force that can be applied to objects. If you have no gravity, no other objects, and no forces, then yes, you don't need mass. But then you don't need a simulation either, because nothing is moving. If gravity were always at its default with no way to change it, you could expose a weight setting and calculate the mass internally. But that's not the case.
  2. As you correctly said, weight is defined as mass times gravity. Since you can change gravity and all other forces independently of the object settings, defining a weight would conflict with that. The base quantity has to be mass, so forces can calculate their influence on the object. With gravity present, mass is effectively the same as weight, just scaled, and since mass matters mostly relative to other objects, you can interpret it as weight if you must; forces then need to be adjusted accordingly. What you are trying to achieve looks really hard to simulate, though. Good luck 🙂 let us know how it goes. A little tip: the simulations are approximations, and some things are far off from reality. Rigid and soft body interactions also depend on the mass of the soft body. In real life I'd say the mass of the surface barely matters, only its elasticity, but in the simulation the relative mass of soft body and rigid body strongly affects how well the rigid body is held up.
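A minimal sketch of why mass, not weight, has to be the base setting, in plain Python (no C4D API, all numbers illustrative): gravity scales with mass, so every mass responds to it identically, while any other force does not.

```python
def step(pos, vel, mass, dt, gravity=-9.81, wind=2.0):
    # Total force on the body: gravity scales with mass, the wind force does not.
    force = mass * gravity + wind
    accel = force / mass              # a = F/m: heavier bodies resist the wind more
    vel += accel * dt                 # semi-implicit Euler integration
    vel_new = vel
    pos += vel_new * dt
    return pos, vel_new

# Same forces, different masses: identical gravity response, different wind response.
p1, v1 = step(0.0, 0.0, 1.0, 0.1)    # light body
p2, v2 = step(0.0, 0.0, 10.0, 0.1)   # heavy body
```

With `wind = 0` both bodies would move identically, which is exactly why a plain weight setting cannot describe how an object reacts to non-gravity forces.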
  3. Redshift does not render signed-distance-field volumes as surfaces; it only renders float values as fog/density. Here are the steps you need: put a Volume material in fog mode on the Volume Builder. Set the voxel falloff a bit higher in the Volume Builder for the object you want to render. In the Volume material there is a slot for density or scatter; on the right, next to the presets (the small arrow), choose the Volume Builder. This sets the Volume Builder's volume as the density of your volumetric effect, and it should now show up. No RS Volume object or VDB export is necessary, though that would also work if you set up the density channel properly.
  4. area weighted normals _ phong lines_fixed.c4d (another try). The old version only crashed on load; the crash was unrelated to the node stuff in the scene. If you want to talk further, please contact me on Backstage. I am apparently not wanted here and my comments are being deleted without reason.
  5. If you open a file, the folder it is in becomes a watch folder for the AB. Maybe move it to an empty folder before you open it. There is no endless loop in a node: nodes don't have while loops or anything else without a fixed limit.
  6. Yep, set the texture tag to camera or frontal mapping (whatever you have set up), then select both the texture tag and the object and call "Generate UV Coordinates". The effectors have a "Selection" slot; drag the MoGraph selection in there.
  7. area weighted normals _ phong lines.c4d
  8. The undesired lines you are seeing are the result of the boolean creating points on the edges. You can see that when calling Current State to Object on the boolean. This is probably a result of how it internally calculates the boolean with triangles and quads and then reconstructs the n-gons. These points are then correctly preserved by the bevel. The solid bevel appears to have some issues with collinear points on n-gon outlines. The Chamfer node, the Bevel deformer and the Bevel tool are all the same code, so don't expect differences there; they just behave differently when they fail. The Bevel generator appears to have been implemented to return nothing, while the node version returns the input; both still do nothing. Solid bevel is also more different from a standard rounded bevel than you'd expect and has its own implementation of many things (hence the different behavior in this case and the lower robustness).
  9. Make the input cube editable, put the material on it, and create the UVs on that object the way you want them. If that looks good to you using UVs, turn the Voronoi Fracture on. This will preserve the UVs you had and also cut them apart correctly on the surface. Then turn on the outside faces selection. These selections are generated and added to the fracture pieces, so you can assign materials to them. They are referenced by name, so you can assign this to the material sitting on the child input; material and selection do not need to be on the same object.
  10. Thank you cerebral, that is the command I meant, and it works. I tried it right after posting and was surprised that "Set UV from Projection" doesn't have "active camera" as an option. Stick Texture is apparently now called the Pin Material tag, but I also couldn't get that to work with camera projection.
  11. Sounds like a job for Stick Texture, or alternatively, create UVs on the input object that match the projection (there are commands for that). The UVs survive the fracturing. It is lossy though, because UVs are interpolated linearly between points, while projections evaluated at render time can be independent of mesh resolution. Don't expect the result to look exactly the same as the projection; you can combat that by increasing mesh density.
  12. gradient.c4d: this uses the bounding box circumsphere to remap the min and max values, which is a bit more stable. The null objects show the min and max and are driven by Xpresso; their positions are then used in the shader.
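The remap idea can be sketched in plain Python (illustrative names only, not the C4D API or the exact Xpresso setup in the file): the bounding box circumsphere gives stable min/max camera distances, which remap any point's distance into the 0..1 range a gradient expects.

```python
import math

def circumsphere(bbox_min, bbox_max):
    # Sphere through all corners of the axis-aligned bounding box.
    center = [(a + b) / 2 for a, b in zip(bbox_min, bbox_max)]
    radius = math.dist(bbox_min, bbox_max) / 2
    return center, radius

def remap_distance(point, cam, center, radius):
    near = math.dist(cam, center) - radius    # closest the object can get
    far = math.dist(cam, center) + radius     # furthest the object can get
    t = (math.dist(cam, point) - near) / (far - near)
    return min(max(t, 0.0), 1.0)              # clamp into the gradient range

c, r = circumsphere((0, 0, 0), (2, 2, 2))
t = remap_distance((1, 1, 1), (1, 1, 11), c, r)   # box center sits halfway
```

Because the sphere bounds are fixed per frame, the gradient stays stable while the object moves, instead of rescaling with whatever happens to be nearest or furthest.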
  13. zerosixteosixes' solution is pretty much how I would approach this, and it is the best solution. The nearest-furthest range remap could be added to it. His solution uses camera distance; I would have used view-plane distance. They are similar, but camera distance has a spherical bend toward the edges of the image. That could be intentional, of course.
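The difference between the two distance measures can be sketched in plain Python (names are illustrative, not part of any C4D API):

```python
import math

def camera_distance(point, cam):
    # Euclidean distance: bends spherically toward the image edges.
    return math.dist(point, cam)

def view_plane_distance(point, cam, forward):
    # Depth along the (unit-length) camera forward axis: flat, plane-parallel.
    rel = [p - c for p, c in zip(point, cam)]
    return sum(r * f for r, f in zip(rel, forward))

cam = (0.0, 0.0, 0.0)
fwd = (0.0, 0.0, 1.0)
center = (0.0, 0.0, 10.0)   # point in the image center
edge = (5.0, 0.0, 10.0)     # point toward the image edge, same depth
```

Both points share the same view-plane depth, but the edge point has a larger Euclidean camera distance, which is the spherical bend mentioned above.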
  14. It might help if you let us know which render engine you are planning to use, if you want it as a shader. As vertex colors or a vertex map it should be very easy: put a Vertex Color tag on all objects that are supposed to have the effect and turn on fields. Put a linear field in the field list; this is your 3D gradient direction. Turn on remap and set a color gradient. This should even preview correctly in the viewport. It is, however, limited in precision by the density of your mesh. This works on generators in 2023.0.0 and up (maybe S26, but I can't remember). A proper shader is doable: with OSL alone you can do nearly everything, and even without it this doesn't sound too hard. If Redshift or Standard/Physical is the render engine of your choice, let me know. Clamping the furthest point to the camera is a different story, as that information is rarely available in a shader: shading happens locally, and you cannot compute it there. You could set this up with some math and Xpresso to create that information at scene evaluation time and use it in the shader, but this is a bit more complicated.
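The scene-evaluation-time remap mentioned above can be sketched in plain Python (a stand-in for the Xpresso math; all names are illustrative, not C4D API calls): find each point's depth along the camera axis, then normalize between nearest and furthest so the gradient always spans the whole object.

```python
def gradient_weights(points, cam, forward):
    # Depth of each point along the (unit-length) camera forward axis.
    depths = [sum((p - c) * f for p, c, f in zip(pt, cam, forward)) for pt in points]
    near, far = min(depths), max(depths)
    span = (far - near) or 1.0                # avoid division by zero on flat objects
    return [(d - near) / span for d in depths]

pts = [(0, 0, 2.0), (0, 0, 5.0), (0, 0, 8.0)]
w = gradient_weights(pts, (0, 0, 0), (0, 0, 1))
```

In a real setup, the resulting per-point weights would be written into a vertex map (or the near/far values fed into the shader) once per frame, since the shader cannot compute the global min/max itself.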
  15. PForceObject node is what you are looking for. Pipe a P Pass group through it and link the force you want to use.
Copyright Core 4D © 2023 Powered by Invision Community