Everything posted by Havealot

  1. @bentraje They are separate systems. The new simulations are all designed to run on your GPU. What you can't do is influence the simulation with Scene Nodes while it is running. However, you can use Scene Nodes to create your own custom emitter, for example. If you generate points in a Node Mesh and use that in a Mesh Emitter, you can create your own initial positions and properties based on a node graph. This works especially well for one-shot emission. The other thing you can do (at least in theory, I think there is a refresh issue at the moment) is to read particle simulation data in Scene Nodes and do something interesting with it. You could for example build your own custom tracer.
  2. @Leo D Looks like a priority bug where the Cloner is overriding the rigid body positions with the particle positions (which it shouldn't in this setup). You can use instance mode to work around it, as Keppn suggested, but of course that is far less efficient. Or you rely only on particles in your simulation and simply clone onto the result without using rigid bodies. The new particle system doesn't have particle-particle collision yet, but I assume the example in the help was done using the Flock modifier to fake collisions (only keep the separation and turn off cohesion and alignment).
  3. You can add a Vertex Color tag to the Connect and drag in the Cloner as a MoGraph Field Layer. It's not the most efficient way and it only works if the matrix of each clone also happens to be the closest clone matrix for each mesh point. But if your clones are not overlapping and a bit separated, this can work without baking.
  4. It is true that you can't drive the creation of particles with a map, shader or noise yet. But there are two workarounds: you can either drive a polygon selection with a map/shader/noise and use that to restrict the emission, or you can emit evenly and instantly kill particles based on a map/shader/noise. Here is an example. The Shader Field drives a Vertex Color tag which is then used to set the particle colors on emission. Particles are emitted into a first group where their color is checked. If it is below a certain value the particle is killed right away; if not, it is moved to a permanent group where it will happily continue its particle life. In case you're wondering where the animation is coming from, that's set in the Shader Field with the remapping graph to cycle through different grey values of the texture. This is not the most efficient way, as the buffers need to be cleaned up more frequently than needed, but you can achieve the intended effect. kill_by_map.c4d
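     A minimal sketch of that kill-by-map logic in plain Python (not the actual particle node setup; sample_map and the 0.35 threshold are placeholders for the grey value the Shader Field writes into the vertex color tag):

     import random

     THRESHOLD = 0.35  # assumed cutoff for the grey value

     def sample_map(position):
         # Placeholder: in the real setup this is the vertex color sampled at the
         # emission point.
         return random.random()

     def process_emitted(emitted, permanent_group):
         # Particles are born into a "check" group, tested against the map value,
         # and either killed right away or promoted to the permanent group.
         for p in emitted:
             if sample_map(p["position"]) < THRESHOLD:
                 p["alive"] = False
             else:
                 permanent_group.append(p)

     permanent = []
     emitted = [{"position": (i, 0.0, 0.0), "alive": True} for i in range(10)]
     process_emitted(emitted, permanent)
     print(len(permanent), "particles survived the map check")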
  5. Try implementing it. And make sure it runs on CPU, GPU (not just on NVIDIA), ARM processors, all operating systems, correctly deals with ACES, ...
  6. That can be done with a shader field in the vertex color tag. No need for Scene Nodes. But: You can do some cool scene node setups when it comes to scattering initial spawn locations for the particles and crafting your own velocities and such.
  7. That's fair. Autodesk just released Maya and Max 2025 in March 2024 😄
  8. @Johnny Maunason For this particular use case the Fracture object isn't the right solution. Just group your objects along with the effector (with deformation set to object) under a null and it should scale relative to the individual axes.
  9. @Jaee Is your setup correct? I assume what you are trying to do is to use the simulated (low res) version of a mesh to deform a high res net. But your deformer is a child of the wrong object.
  10. Looks like there is an issue with the triangle mesh option in the Collider tag. Use convex hull or box. And if you can, try to keep the "thickness" higher; it is used in the collision calculation and larger values make it more robust. Between rigid body objects this won't lead to a gap; it only adds to the gaps between rigid body objects and collider objects. And: avoid initial setups with intersections of collision shapes. If you have a stack of stuff with all objects sitting flush on top of each other and the bottom one intersects with the floor collider, the first execution will have a collision resolution that propagates through the whole stack. You can start with a relaxed state with a bit of a gap and let them fall into the stack via the simulation. Once they have come to a stable rest you can set that as the new initial state for the simulation.
  11. @LLS It is just a loop in which you can update variables in each iteration. In Python it would look something like this:

     import c4d

     # Initialize your vectors and array. This is the equivalent of creating the
     # initial ports on the outside of the Loop Carried Value and feeding them
     # with initial values.
     start = c4d.Vector(1, 2, 3)
     end = c4d.Vector(4, 5, 6)
     hit_points = []

     # Specify the range for your loop. In Scene Nodes the loop range is defined
     # by the Range Node outside of the Loop Carried Value. In this setup it
     # controls how many bounces are being created.
     for i in range(10):
         # This is the inside of the loop where you do your thing and then update
         # your variables based on some logic. This will happen in each iteration.
         start = end
         end = start + c4d.Vector(i, i, i)
         # If you need all of the in-between results later in the node graph you
         # can store them in an array by appending them in every iteration.
         hit_points.append(end)

     # Now the range / loop is done and we are leaving its scope. If you access
     # the variables now you'll get the state of the last iteration for start and
     # end, and an array of all the iteration states for hit_points.

     Hope that helps.
  12. @HappyPolygon That is GLSL. I have managed to port a shader from GLSL to OSL in the past, but I am not sure if this one will work.
  13. @LLS One benefit you'd get in Houdini for this type of scenario would be direct support for volumes. Scene Nodes doesn't have volume nodes, so you'll have to work with a combination of classic C4D and Scene Nodes. Regarding multi instances: of course Houdini has instancing. Regarding the volume builder: I've never done any direct comparisons, but I would assume it is pretty much the same. In many cases Houdini isn't much faster than other tools, but since there is automatic caching and a nice workflow for working with disk caches you often get better perceived performance overall.
  14. @hyyde This was just meant as a workaround. It's a known issue and will be fixed in the next release. Please still try whether using default layouts gets rid of the crashes on your end. And if you haven't done so already, please also send reports for crashes that occur without plugins.
  15. @hyyde There is an issue related to the use of custom layouts. Are you using any of those? If it's an option in your case, maybe you can use the standard layouts until a fix is released.
  16. @HappyPolygon Not everyone in a software company is a programmer.
  17. Several things: a) I think the projection set in the material tag is irrelevant in your setup because you are reprojecting with the UV projection node. b) As I suspected, the setup uses a distribution that is a result of effectors. Fix Texture "straight" uses the cloner distribution, which is not the state you want your clones to be in when projecting. c) Last but not least, I am not even sure Redshift supports the Fix Texture functionality. I've attached a version that uses some hacks to kinda get there. Take a look, maybe that helps. In the worst case you'll have to let go of keeping everything procedural and bake your projection into UVs for each clone. You can put them in a Fracture object if you still want to use effectors for animation. Cloner_Pin_Texture_workaround.c4d
  18. I've just tried flat projection here and it worked as expected. Textures stick when effectors are applied. Of course you have to adjust the scale of the projection to fit your setup if you don't want it to tile. Check your cloner without the effectors; as stated in the manual, that is what will be used to stick the texture. I assume that's what you are missing, but it's hard to tell without the scene file.
  19. @Mantas Kavaliauskas Have you consulted the manual for this? https://help.maxon.net/c4d/en-us/#html/OMOGRAPH_CLONER-ID_OBJECTPROPERTIES.html?TocPath=MoGraph%7CCloner%20Object%7C_____3 "This method works with every projection type, except for UVW, camera and frontal mapping." Your setup is using camera mapping. That's why it doesn't work.
  20. Hard to say without the setup. But I assume the problem is that you are turning off the node (the "on" port) and then trying to set the enabled state.
  21. Geometry is the mesh in the local space of the object. You are not using the matrix of the landscape object. Transform the points into global space first before using them in the Distribution node.
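     The same local-to-global transform expressed in Python rather than Scene Nodes (a sketch only, assuming it runs in the Script Manager with "op" set to an editable polygon version of the landscape object):

     import c4d

     def main():
         mg = op.GetMg()                      # global matrix of the landscape object
         local_points = op.GetAllPoints()     # point positions in local object space
         global_points = [mg * p for p in local_points]  # world-space positions
         print(global_points[:3])

     if __name__ == '__main__':
         main()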
  22. It doesn't only loop at 300. It loops at 60. The formula is "0.5 + sin( ( subfields + d ) * f * 360 )". Let's dissect it:
     - The "0.5 +" you can ignore because it is always the same and therefore doesn't affect the loop period.
     - sin() is a trigonometric function of an angle. You can think of it as returning the y-coordinate of a point traveling around a circle of radius 1 for a given rotation value.
     - Usually the rotation is provided in radians, where 2 Pi equals a full rotation. The sin() function in C4D formulas uses degrees though, so a full rotation is 360°.
     - The input to the function is the result of whatever is inside the brackets after the function, in this case "( subfields + d ) * f * 360".
     - The subfields part you can ignore, because your setup isn't using any subfields, so it will always be 0.
     - So we are left with d * f * 360.
     - d returns the time scaled by the factor in the UI. Usually it would be 0 on frame 0 and then grow by 1 each second, but since you have set it to 10% it will grow by 1 every 10 seconds.
     - f is the frequency and just acts as a factor (unless you animate it). In your setup it is set to 5.
     - * 360 is constant and is just used to get to the 360° of a full rotation.
     So by scaling "d" to 10% you made the animation 10 times slower, and then by setting the frequency to 5 you made it 5 times faster. 0.1 * 5 is 0.5, so your sin() input will go from 0 to 360 degrees in 2 seconds. That's where the animation loops. You can make it faster or slower by changing f, changing d (via the percentage in the UI), or increasing/decreasing the "* 360" in the formula. If you had left d at 100% and f at 1 but used "0.5 + sin( ( subfields + d ) * f * 180 )", you'd get the same result. So you can change the loop period, but not without making the animation faster or slower, which is what you asked about in your post. Hope that explains it. If you want to learn more about the math you can find some awesome free resources online:
     - https://www.khanacademy.org/
     - https://www.youtube.com/@3blue1brown
     Everything that is specific to C4D when it comes to formulas you can find in the manual:
     - https://help.maxon.net/c4d/en-us/#html/6194.html
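     A quick numeric check of the 2-second period, assuming subfields = 0, d scaled to 10% (so it grows by 0.1 per second) and f = 5 as in the setup above:

     import math

     def field_value(seconds):
         d = 0.1 * seconds                # "d" at 10%: +0.1 per second instead of +1
         f = 5.0                          # frequency factor from the UI
         angle_deg = d * f * 360.0        # input to sin(), in degrees
         return 0.5 + math.sin(math.radians(angle_deg))

     # 0.1 * t * 5 reaches 1 (a full 360° turn) at t = 2, so the value repeats
     # every 2 seconds.
     print(field_value(0.0), field_value(2.0), field_value(4.0))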
  23. @Mantas Kavaliauskas It loops at other intervals as well. The length of your loop will depend on the formula itself and the variables you use. This is not C4D related, it's just math. The question "how do I change the period/length of the loop without changing the general look/design and speed of the animation?" does not make sense. What you are asking for is a clock with hands that rotate faster than usual, but where an hour should still be one hour long. That's not possible. If you want more control over your loop length, use noises with a loop period.