pfistar

Registered Member
  • Posts

    115
  • Joined

  • Last visited

Everything posted by pfistar

  1. Hi bezo! Many thanks for the reply. This looks good, though my problem has acquired another dimension in the meantime. What I'd like to do is animate the textures or shaders over time, so that at the first keyframe the colors of the clones are randomized (by way of a MoGraph Multi Shader and a Random Effector), and by the last keyframe each clone's color is determined by its MoGraph Selection tag and the Shader Effector assigned to that tag. To rephrase: I would like to blend these colors over time, so that at the start of the animation the colors are randomized, and they are eventually overridden by their new colors. I gave this a try, though I don't seem to be getting any results. The Shader Effector does not override the underlying Multi Shader. I played around with different settings under the Shader Effector's 'Shading' tab, but nothing seems to make a difference. I clearly don't have a good technical understanding of how these Effectors actually work. Would you be able to offer further insight? Thanks again, Nik signal-intensity-graph_for_forum_0002.c4d
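The blend I'm after is essentially a per-clone linear interpolation between a random start color and a target color. Here is a minimal sketch of that logic in plain Python (illustrative only, not the C4D API; the frame range and colors are made-up values):

```python
def lerp_color(start, target, t):
    """Linearly interpolate two RGB tuples; t=0 gives start, t=1 gives target."""
    return tuple(s + (g - s) * t for s, g in zip(start, target))

def blend_at_frame(frame, start_frame, end_frame, start_color, target_color):
    """Color of one clone at a given frame, blending over [start_frame, end_frame]."""
    # Normalized progress through the blend, clamped to [0, 1]
    t = (frame - start_frame) / float(end_frame - start_frame)
    t = max(0.0, min(1.0, t))
    return lerp_color(start_color, target_color, t)
```

In effect this is what I'd want the Shader Effector's Blend strength to do for me over the keyframed range.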
  2. Hi all, As the title implies, I would like to assign materials to clones in a Cloner array based on their 'MoGraph Selection' indexes. Is there a way to do this without splitting the clone array across multiple cloners? It might be relevant to note that my cloning mode is set to 'Object' and the object I'm using as a template is actually a Matrix object that references a geometry mesh. File attached: Many thanks for any reply, NpF signal-intensity-graph_for_forum.c4d
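Conceptually, what I'm after is a lookup from clone index to material via the selections. A sketch of that mapping in plain Python (hypothetical names, not the C4D API):

```python
def material_for_clone(index, selections, default="Mat_Default"):
    """Return the material for a clone index.

    selections: list of (material_name, set_of_clone_indices) pairs,
    standing in for MoGraph Selection tags; the first selection that
    contains the index wins, otherwise the default material applies.
    """
    for material, indices in selections:
        if index in indices:
            return material
    return default
```

This is the assignment behavior I'd hope to get from restricting materials to MoGraph Selection tags on a single Cloner.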
  3. So sorry about that!!! File attached... https://www.dropbox.com/s/z4388jlrmaz3zm5/TP-disintegrate-on-impact_3groups_v02.c4d?dl=0 File size is somewhat large due to MoGraph cache
  4. Hi digitvisions, I appreciate the reply, though I'm actually trying to avoid this: I'm trying to hit a deadline and hope to re-render as few frames as possible, so ideally I'd like to not have to adjust any settings before Group.2. I'm still very much a novice at Xpresso, but what I wonder is whether it's possible to trigger a scale-down by some means that does not involve the 'stuck' particles being passed to a new group. I'm inclined to think that I could extract a velocity value per particle using a PGetData node, single out any particles hitting a zero or near-zero velocity value, and then use those values to trigger a scale-down. Just thinking aloud here, however; I don't have much of an idea of how I would set this up properly. Another possible solution (though again, no idea how this would be done): decrease the particle life-span in the emitter node, and somehow trigger the scale-down so that it begins to happen x amount of frames before the particle dies. Thanks again for the reply! NpF
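To make the first idea concrete, the per-particle rule I have in mind would look something like this in plain Python (an illustrative sketch of the logic only, not Xpresso or the C4D API; the threshold and fade rate are made-up values):

```python
def speed(velocity):
    """Magnitude of a 3D velocity tuple."""
    return (velocity[0] ** 2 + velocity[1] ** 2 + velocity[2] ** 2) ** 0.5

def update_scale(scale, velocity, fade_per_frame=0.1, threshold=0.05):
    """Once a particle is (nearly) stuck, shrink it a little each frame.

    Moving particles keep their scale; near-zero-velocity particles are
    faded toward zero, which is what I'd feed back via PSetData.
    """
    if speed(velocity) < threshold:
        return max(0.0, scale - fade_per_frame)
    return scale
```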
  5. Back again to report that I've made some headway on this (mostly thanks to tutorials by Athanasios Pozantzis, aka NoseMan, and Eric Liss). I've managed to get an effect that fades the scale of each particle based on Age/Life. I've also managed to use ParticleGroup2 to emit secondary particles. Here's a gif of what that looks like: https://www.dropbox.com/s/alyu0ictaqkaoh1/TP-disintegrate-on-impact_3groups_v02_precached.gif?dl=0 Also, my Xpresso network: https://gyazo.com/a5fbd333a945a53e78db97015f25ad9c
     There are still a few things unresolved, however. Hoping someone can offer some tips...
     1) The first particle group (Group.1 = yellow) hits a PDeflector with a small bounce setting. The PDeflector Event creates a new group (Group.2 = green). However, it appears not all of the Group.1 particles get transferred to the new group upon impact with the Deflector mesh. Some remain yellow and stuck to the Deflector surface. While it's not necessarily critical that these stragglers get transferred to the green group, it is imperative that they fade away rather than simply die out. Can anyone suggest a way to use the PDeflector Event as the trigger to begin a scalar fade to zero?
     2) I am using a MoGraph Multi Shader with a Random Effector to randomize color on the particles. While it appears the color assignment stays as is when Group.1 becomes Group.2, Group.3 gets an entirely different assignment, which it would be great to override. Ideally, the Group.3 particles would inherit the color assignment of Group.2. I could be wrong, but I would guess that colors are assigned based on point numbers. Can this be overridden? Could the random value assigned to Group.2 be passed on to override whatever random values would happen on Group.3?
     Once again, thanks ahead of time to any responders! Cheers, NpF
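For question 1, the behavior I'd want from an event-triggered fade is easy to state outside of Xpresso. A plain-Python sketch of the idea (illustrative only, not the C4D API; the 12-frame fade length is an assumption):

```python
def scale_after_impact(frame, impact_frame, fade_frames=12, base_scale=1.0):
    """Full scale until the recorded impact frame, then a linear fade to zero.

    impact_frame would come from the PDeflector Event (None if the particle
    has not hit the deflector yet).
    """
    if impact_frame is None or frame < impact_frame:
        return base_scale
    t = (frame - impact_frame) / float(fade_frames)
    return base_scale * max(0.0, 1.0 - t)
```

So the event only needs to stamp each particle with its impact frame; the fade itself is then just a function of elapsed frames.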
  6. Greetings all! I'm wrestling with another ThinkingParticles problem. This time, I'd like to give the effect of small-ish spherical projectiles colliding with a surface and disintegrating into smaller particles which disappear over a few frames (maybe 12 to 24 frames). The effect is for a scientific illustration but does not need to be particularly realistic. I see the step-by-step as looking something like this:
       • Particles are spawned from a geometry mesh emitter
       • They're driven by some wind or gravity until they reach another geometry surface which is set as a collider
       • They bounce away from the collider and each particle's scale fades to zero on all axes over a few frames
       • At the same time, a new particle group is created from the collision event (based on position and time of impact) and each particle in this group spawns a new set of particles
       • The new particles explode outward from each original particle (like fireworks), but these would also scale to zero over time (fairly quickly, maybe 6-12 frames) and then die once they've reached a scale of zero
     At the very least, I'd like to accomplish the scale fade on the original particle group, though the 'explosion' would be cool too, presuming it's possible to do with C4D's native tools. Attaching a work file that shows my so-far failed attempts. Thanks ahead of time to any responders! Cheers, NpF thinkingparticles-disintegrate-on-impact.c4d
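The scale-fade step above is just the particle's age mapped against its lifetime. A minimal sketch of that relationship in plain Python (illustrative only, not the C4D API):

```python
def fade_scale(base_scale, age, life):
    """Scale goes linearly from base_scale at birth to zero at end of life.

    age and life are in frames; a dead or zero-life particle gets scale 0.
    """
    if life <= 0:
        return 0.0
    t = min(max(age / float(life), 0.0), 1.0)
    return base_scale * (1.0 - t)
```

With a 12-24 frame lifetime, this is exactly the "disappear over a few frames" behavior described in the steps.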
  7. Hi all, I'm attempting to create some randomized point clouds using the 'Volume' distribution method and nested MoGraph Cloner hierarchies. In this case, I have some cloned cubes with randomized transforms as the upper hierarchy, and the lower hierarchy is cloning spheres and distributing them according to the volumes of those cubes cloned in the upper hierarchy. Here's a screenshot of my setup so far. Left side: this is the desired effect, though in order to achieve it, I had to collapse the upper Cloner hierarchy of cubes down to a group of mesh objects. Middle: this is the pre-collapsed Cloner hierarchy I started with. Here, the sizes of the individual cloned sphere clusters inherit the scale value of the cubes they are cloned to. I understand why this happens, but the reason I'm posting is that I'm wondering if anyone knows a way to override the instantiation of specific parameters, in this case 'Scale' and the random seed on 'Volume' distribution. Right side: this was simply a test to see whether the sphere (level 2) clones would be distributed the same way for each cube (level 1). The answer is 'yes', as is evident. As always, thanks to all responders ahead of time. Cheers, NpF
  8. Hi Hrvoje, I finally had a little time to slow down and explore this problem. It turned out that the problem was not with the Cloner's instance function. Rather, it appears it was because I had cached my sim while my viewport LOD was not set to 'High', and along with that, there's a setting that translates that Low/Med/High setting to your CPU render settings. I'm pretty sure this was the problem, though it took a little while to parse, as there were a number of other factors which I thought might have been suspect with this combination of features (ThinkingParticles | TP Groups | MoGraph Cloner | MoGraph Cache | MoGraph Shader | Random Effector | viewport LOD feature | and also the Takes function). For what it's worth, here's a pared-down version of the scene, though with the Cloner Cache inactive so as to not have to link you to another huge file. https://www.dropbox.com/s/2i66pw36zotz5hj/TP-mograph-cache-v03.c4d?dl=0 For my own clarity, I made a quick index of dependencies for once a TP sim has been cached. Happy to share it in this thread if you're interested at all. Thank you again for your help and insights! NpF
  9. Thanks for this. I'll be sure to give it a try as soon as I get into r21.
  10. Hi Hrvoje, Many thanks for this reply also! Apologies for the heaviness of the scene. When I have a spare minute I'll parse out whatever's less than necessary and send a new link. I tried turning "render instances" off, though I'm still getting the same result, where only a fraction of the clones show up at render-time. I also pushed up the "Max Particles" count in the Thinking Particles Settings panel, just to see if this would do anything, though I suspect this is not where the problem lies. Otherwise, I'm not sure what you mean when you say "try using instance mode..." Does this mean first creating an instanced version of the cloned sphere object and using that under the Cloner object? Or could it mean some function in one of the TP nodes in Xpresso? Sorry, I'm a bit confused. I should mention I seem to be having the same problem in another scene as well, where I'm using a Cloner with any ThinkingParticles setup that has more than one Particle Group within a hierarchy. Does this sound familiar at all to you? I may just end up using the brute-force solution of one Cloner object per Particle Group. Thanks again for your reply and your insights! Nik
  11. Hi Hrvoje, Thanks very much for the response. I do not have r20, unfortunately, though I managed to get the general effect I was after with a simple ParticleAge node, and the right decay setting on a Gravity node. I'd be curious about a MoGraph solution, however. But it sounds like you might be saying that the function I'd be after is in r20 only? Thank you again! Nik
  12. Hi all, I have a MoGraph Cloner setup set to Object mode, where a TP particle group is being used as the framework for cloning. The object being cloned is a simple mesh sphere, though the Cloner has a MoGraph Multi-Shader applied to it so that the particles are 5 different colors. After a couple of test renders, I'm noticing that once I have cached my clones to disk using the MoGraph Cache tag, C4D only renders a fraction of the clones. Here's a side-by-side comparison: https://www.dropbox.com/s/k4xq8teg9bzfdms/cached_v_uncached.mp4?dl=0 Also, here's my scene file: https://www.dropbox.com/s/4nhz9hnjn5if40o/VID1_Shot05_Disintegration_v13_mograph-cache-test_more2.c4d?dl=0 I did a test in AfterEffects to make sure things line up, and sure enough they do - the only problem is that the cached version is missing what looks to be about half of its clones. Is this normal behavior for the tag, or have I missed something in the setup? Thanks ahead of time for any insights... NpF
  13. Hi all, I'm trying to figure out how I might set up a particle emitter system using TP so that EACH particle birthed from a sphere moves at high speed at first and then slows down the farther it travels from its source, at which point a wind or gravity force carries it off in a new linear direction. I was playing around with the Texture & Light parameters in the PMatterWaves node, along with some force nodes (Gravity, Wind, etc.), but not really getting the results I'm after. Another potential solution I figure might work would be some kind of setup based on each particle's distance traveled, where a particle can only be affected by an extraneous force after it has traveled past a distance threshold, though I don't know how to set this up either. Link here to a downloadable file where I'm trying a number of different things (but again, not quite getting the results I'm after). Hoping someone can lend some insight. Many thanks! NpF
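The distance-threshold idea is simple to state outside of Xpresso. A plain-Python sketch of the rule (illustrative only, not the C4D API; the 100-unit threshold is a made-up value):

```python
def effective_force(pos, origin, wind_force, threshold=100.0):
    """Wind only acts once the particle is past `threshold` units from its source.

    pos/origin are 3D points; wind_force is the force vector that would be
    applied. Inside the threshold radius, the particle feels no wind at all.
    """
    dx, dy, dz = (pos[0] - origin[0], pos[1] - origin[1], pos[2] - origin[2])
    dist = (dx * dx + dy * dy + dz * dz) ** 0.5
    return wind_force if dist >= threshold else (0.0, 0.0, 0.0)
```

A smoother variant would ramp the force in over a falloff range rather than switching it on abruptly, but this is the gating behavior I mean.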
  14. Hi Hrvoje - Thanks for the suggestion. I just gave it a look though I don't see how that would work since the gizmos in the viewport represent nodes that only seem to exist as objects within Xpresso. Unless there's some secret I don't know, I'm not certain there is a way to access those Xpresso nodes as "objects" via the Object Manager, other than by clicking on the Xpresso tag to open the Xpresso window.
  15. Hi all, An esoteric but simple question: anybody know of a way to make all Xpresso gizmos invisible in the viewport? In my specific case, I have a few XParticles Wind nodes in the scene, and each has its large plane-&-arrow gizmo to represent the wind's direction. This is all well and good, though I like sending clients hardware playblasts and it would be nice to not have those extra distractions. I tried all options in the Display panel and combed through Preferences as well, but couldn't find the right check-box. Many thanks, NpF
  16. Thanks very much natevplas for the quick response! Exactly what I was looking for!
  17. Hi all, Wondering if anyone knows a way to override the up vector of clones on a surface. In short, I have a human cell model whose surface is animated using Displacer and Random deformers. I need to scatter some small receptor structures on its surface, which I'm doing using the Cloner's Surface distribution function. The position of the clones follows the animated surface, which is what I want; however, the clones also inherit the normal direction of the surface (polygon or point, I'm not sure which), which is something I don't want. Rather, I'd like the normal direction to be derived from the center point of the object itself, so that the clones' positions follow the surface, but they always face directly outward, rather than following the vector of the nearest poly or point. I'm guessing this might be a job for one of the MoGraph Effectors, or alternatively there's some Xpresso-based solution, but I haven't figured it out. Also: is there a way to make a sub-selection of polygons on the cell object after I've applied the Displacer deformer and Random Effector? "Active selection" doesn't seem to be available after these are applied to the base mesh. Thanks ahead of time to anyone with tips! NpF receptors_minimum.c4d
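The "face directly outward" direction I'm describing is just the normalized vector from the object's center through each clone. A minimal sketch in plain Python (illustrative only, not the C4D API):

```python
def outward_direction(clone_pos, center):
    """Unit vector from the object's center through the clone.

    This is the axis I'd want to use as each clone's up/normal direction
    instead of the surface normal inherited from the nearest polygon.
    """
    d = [clone_pos[i] - center[i] for i in range(3)]
    length = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
    if length == 0:
        return (0.0, 1.0, 0.0)  # arbitrary fallback for a clone exactly at the center
    return tuple(c / length for c in d)
```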
  18. My guess was that it might depend on the farm's particular setup, but many thanks for the response and for lending a little more clarity! -NpF
  19. Greetings to Hrvatska, from Brooklyn. Thanks for the fast response - and for the clarifications! 'Tis a pity Xpresso keys can't be shown in powerslider, especially for non-dual-monitor types like me. Best, Nik
  20. Greetings all, Hoping someone can clarify a few things regarding the MoGraph cache feature, particularly as it pertains to network or farm rendering. I understand the difference between caching to RAM and caching to a .mog file, but if I have some Cloners cached to RAM, will the C4D file hold onto that cache info, or will I have to run a re-cache if I quit and re-open C4D? Regarding the use of .mog files and remote render farms, is there any standard nomenclature for the folder name I should save the .mog sequence to, or does this tend to vary from service to service? Furthermore, is it generally better practice to render out the .mog file for farm rendering, or does simply rendering the cache to RAM/C4D file tend to suffice? One more related question: would I see any difference in CPU render-time between caching to RAM/file vs. caching to disk? Many thanks, NpF
  21. Greetings all, Something of a newbie question that's been bugging me for a few weeks now. I'm wondering if there's a way to get keyframes created in Xpresso to show up in the main timeline (below the viewport). I find it a bit of a hindrance to have to open up the F-Curve or Dope Sheet to move keys around every time I need to make an adjustment. Many thanks! NpF
  22. @Jed, Many thanks. I appreciate the explanation! I think this clarifies quite a bit.
  23. Wow, this is great - thanks very much for pointing me in this direction! As I mentioned, I'm quite new to Xpresso (but I should mention I have been studying Houdini for the past year or so, and the approach is somewhat similar). Looking at the way you amended my file, there are still a few things that are a little mysterious to me: So the tag that contains the main setup (Iteration > LinkList > Object > PMatterWaves, etc.) has to somehow reference the "Global Velocity" data on each of the spheres. Based on your setup, I would assume the Global Velocity data gets assigned to the sphere object itself through their individual Xpresso tags, and this data gets collected in the LinkList node? I'm a little confused about what exactly the "Tag" operator node and the one next to it called "Xpresso" are actually doing (see attached jpg). In the "Xpresso" node, it appears we're referencing the tag of the first object in the LinkList and calling on its "Global Velocity" user data? Also, is there any danger in changing the names of the Xpresso tags? In other words, are the names read as string values, or are all references absolute when you're working in Xpresso? Sorry for the continued naive questions - I'm trying to wrap my head around how this system works. Many thanks again, Nik
  24. srek, Thank you again for the suggestion! I appear to have figured things out as far as creating an iteration group and linking it to the emitter node so that particles are emitted across all objects at once! Attaching a new file to show. A couple of follow-up questions, if you would be so generous: I'd love to see the emitted particles inherit some velocity from the emitter geometry, so that rather than leaving an immediate trail, they explode outward a bit from the spheres before drifting away. I've approximated the effect (badly) by keyframing a few of the emitter's parameters, but this is of course less than ideal. Here's something from an old post of yours: This makes a lot of sense to me in theory, though I can't seem to make it work in reality (please see my file; though you're talking about PStorm, I don't see how the same wouldn't apply to PMatterWaves). I can't get a green wire when trying to connect my Math node to the PSetData node. I'd guess this is what happens when you try to connect incompatible data types, but I've set the mode to 'vector' as suggested. I'd also wonder where the position velocity is calculated from on the emitter object (is it per point, or per poly, or from the world position of each new particle at its frame of birth?, etc.). Also, a tangential and probably noob question: is there a way to enable keyframes created in Xpresso nodes to show up in the timeline under my viewport, the way most keys do? Thank you again, Nik XPresso-Iterator-v02.c4d
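For clarity, the velocity inheritance I'm describing is just adding a fraction of the emitter's velocity to each newborn particle's own emission velocity. A plain-Python sketch of that idea (illustrative only, not the C4D API; the 0.5 inheritance factor is a made-up value):

```python
def birth_velocity(base_velocity, emitter_velocity, inherit=0.5):
    """Velocity of a newborn particle.

    base_velocity: the particle's own emission velocity (3-tuple).
    emitter_velocity: the emitter geometry's velocity at birth time.
    inherit: what fraction of the emitter's motion the particle picks up.
    """
    return tuple(b + inherit * e for b, e in zip(base_velocity, emitter_velocity))
```

This is the vector math I'd hope the Math node feeding PSetData would perform per particle.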
Copyright Core 4D © 2024 Powered by Invision Community