
pfistar

Registered Member
  • Posts

    115
  • Joined

  • Last visited

Everything posted by pfistar

  1. Hi all, I would like to use a Field (or some other method) to select points in a MoGraph Weightmap based on normal direction. The idea is that I’d like to create clones on only the Y-up-facing surfaces of an object (like snow on a roof, or something like that). I’m wondering if I can achieve this without a custom script or Python Field, or, if it’s achievable with the Formula Field, what the expressions would be that would let this happen. Preemptive thanks for any responses, NpF
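To make the ask concrete, here's the kind of per-point logic I have in mind, sketched in plain Python (this is just the math, not actual C4D API code; in a real Python Field the normals would come from the polygon object, and the function name and threshold here are only illustrative):

```python
# Plain-Python sketch of the per-point selection logic (not C4D API code).
# Weight each point by how closely its normal aligns with world +Y.

def y_up_weight(normal, threshold=0.5):
    """Return a 0..1 weight: 1.0 when the normal points straight up,
    0.0 once it tilts past the threshold (cosine of the max angle)."""
    nx, ny, nz = normal
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    cos_up = ny / length  # dot(unit normal, (0, 1, 0))
    if cos_up <= threshold:
        return 0.0
    # Remap threshold..1 to 0..1 for a soft falloff
    return (cos_up - threshold) / (1.0 - threshold)

print(y_up_weight((0, 1, 0)))  # roof-like surface -> 1.0
print(y_up_weight((1, 0, 0)))  # vertical wall -> 0.0
```

A Formula Field would need access to the normal's Y component to do the same comparison; whether it exposes that is exactly what I'm unsure about.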
  2. Greetings all! I'm looking for some quick-fix tips on how to minimize Moire effects in animations where I have a high-res Displacement map with a lot of fine parallel lines, mapped to some simple geometry. What you see here has already been de-Moire-d in AfterEffects using the Median filter. The Moire-patterns were quite a bit more gnarly (see below) before the AE filter was applied: I'm hoping to find a quick solution, preferably involving post-processing (instead of re-rendering) BUT I am open to doing a re-render (c4d/redshift) if that's what it takes - I'd imagine maybe tweaking some settings within the Displacement material node, or the RS Object Tag Displacement Tab, or maybe in render output settings could help? If anyone has any tips, I'd be grateful if you could share! Link to C4D project file here (has some chunky bitmaps so can't upload to Core site): https://www.dropbox.com/scl/fi/iqj7tdsy6wpwdrf1wosbz/displacement-greebles-4-forum.zip?rlkey=f45qgeyx3w873ukpxjr1u55zy&dl=0 Many thanks! NpF
  3. Greetings all, I am interested in building a rig that would let me switch a character from keyframed or clip animation to a ragdoll/physics-driven animation. Here’s a great example of someone doing exactly that in a project to great effect: https://www.instagram.com/reel/Cwk_DqNNi74/?igshid=MTc4MmM1YmI2Ng== Though I've hunted high and low, unfortunately I haven’t found a single tutorial that shows how to set something like this up, so I wonder whether one exists at all, despite the heaps of tutorials out there showing the various ways to rig up a mannequin using the Connector object and Ragdoll function. I’ve made my own attempts, but have only gotten as far as matching the bone-transform coordinates of a ragdoll skeleton to a clip-animated one at a specific keyframe, and then switching visibility from one mesh object to the other at render time. Seems like a cave-man method, and kinda half-useless if I need to render in-scene motion blur. So I’m thinking there must be a slicker way to do this. It’s come to my attention that there’s a plug-in for this - https://www.youtube.com/watch?v=mmiKN2F8SWk Has anyone subscribed to this? It looks GREAT! Though I would still feel more satisfied if I was able to build the rig I need myself, rather than paying $$ for it. In any event, I’m hoping someone can either point me to the right tutorials, or suggest the proper method if you’ve done something like this yourself (created a rig that lets you switch between keyframe or clip animation, and ragdoll physics). Preemptive THANKS for any reply! NpF
  4. Greetings all, I’m running a fairly simple soft-bodies test scene with 5 seconds (120 frames) of animation. I’ve got a simple filleted cube object that’s cloned 12 times and cached at the Cloner level before applying a Connect generator and Subd Surface. I assume this is best practice, though I wonder if that’s correct. I am trying to avoid obscenely high cache-write times and file sizes, while holding to the general integrity of the soft-body dynamics. I did try a more geometry-dense setup (4x polys on the cubes and 24 clones instead of 12), though cache-write times were prohibitively high. Here’s a link to my project file: https://www.dropbox.com/scl/fo/72askj4i2y0dv84sqvah9/h?rlkey=bpfrbt5cgtbnwp72xwjowzozd&dl=0 All said, I have a number of questions: When working with Soft Body Dynamics within a Cloner, is it generally best to apply the cache at the Cloner level? Are there any known shortcuts and/or workarounds for speeding up cache-write times on geometry that’s being deformed as a soft body? Can the animation data of a soft-body object be cached to disk, rather than written to the file? If I wanted to add more objects to the sim, for example an array of spheres contained within one of the soft-body objects, as if inside a plastic bag, what would be the best (most resource-efficient) way to sample the geometry of the soft body so that it could act as a membrane or barrier that holds the dynamic rigid-body spheres (affected by mass and gravity) inside itself? For soft-body dynamics, is the Bullet tag generally more recommended for contemporary setups, or is the Soft Body Simulation tag more common these days? I also noticed that adding a Redshift Object tag anywhere in the geometry hierarchy makes all the geometry disappear completely from the viewport - why does this happen? Preemptive THANKS for any reply! NpF
  5. C4D: v2024 recently installed | repeated system crashes during Redshift render sequence Greetings all, I’m running a fairly simple soft-bodies test scene, and I’ve run into several total system crashes while trying to render a short (120-frame) sequence. I suspect it may be as simple as my system overheating. I’ve been keeping the front panel of my computer open so air movement is less obstructed, and that seems to make things less crash-prone; however, I did experience one crash even with the front panel open. Also, the Windows blue-screen-of-death error I repeatedly get says: Stop code: CLOCK_WATCHDOG_TIMEOUT Does this pretty much verify that the crash is caused by hardware heat? Or is there still a chance the crash/error could be related to something else, like insufficient memory, or a bug related to the v2024 update? If so, what are some ways to troubleshoot what might be causing the error? System specs: CPU: i9-11900K @ 3.5GHz GPU: NVIDIA GeForce RTX 3080 RAM: 64GB File info: C4D version: 2024 Redshift version: 3.5.19 Dynamics Body Tag Cache: 1.10 GB Motion Blur enabled (transform and deformation) Link to scene file: https://www.dropbox.com/scl/fo/72askj4i2y0dv84sqvah9/h?rlkey=bpfrbt5cgtbnwp72xwjowzozd&dl=0 Preemptive THANKS for any reply! NpF
  6. @Hrvoje | @Zerosixtwosix Thank you both for your replies. I did a little more exploring on the topic, though I still have a ways to go. I was initially hoping to get where I needed to go using the simple Voronoi > Figure setup with one universal Connector, though I think this solution simplifies things too much, and I won't be able to get a convincing-looking "human being tossed from a motorbike" using only these components. I've attached a file that takes the advice both of you gave: From @Zerosixtwosix: "Connector - Display - Always visible" - this was a good tip; it showed where the Connectors appear to be disengaged between the "bones", and this was presumably dependent on the pivot angles from one bone to the next. I adjusted the transform pivots (but only the pivots, not the geo) on certain bones (for example the shin bones and the foot bones) where the rotations were profoundly off from the original T-pose. This did the trick, as far as preventing any body parts from detaching upon impact with Collider objects. The end effect, though, looks a bit too rubbery - this likely has to do with the "Fixed" Connector type itself. I experimented with some of the other options (Ragdoll and Hinge, for example), but those all end up looking too floppy and/or chaotic, as the angle limits don't seem to be controllable per individual joint. BTW, you mentioned you "have a similar tutorial" - does this mean you created a tutorial on the topic of ragdoll/figure dynamics?? Or that you learned your setup from one that's similar to the one I mentioned? I LOVE your procedural tutorials on YouTube, would love to see more! From @Hrvoje: "Took a quick look. In ragdoll test scene, you are using bullet. Use expert settings in project setting tab (bullet) and increase iterations" Here I found that adjusting global iteration settings did not really do much for keeping body parts from detaching (I think this had everything to do with pivot angles between objects connected by the Connector, as @Zerosixtwosix suggests), but adjusting the global physics settings made some difference in how fast and far and at what angle the figure might bounce off of the impact Collider objects, which does affect the overall realism. In any event, I'm going to continue this exploration using a more customizable setup where Connectors can be individuated. These are tutorials I've found on the topic, though if you guys know of others, please let me know! https://www.youtube.com/watch?v=fLbosEhJyrI&list=PL_or3CGiooApC0hq4HHik1DQC1agDCTCv&index=2&t=1538s https://www.youtube.com/watch?v=zZOhit1BLO8&list=PL_or3CGiooApC0hq4HHik1DQC1agDCTCv&index=3 https://www.youtube.com/watch?v=raY4qe-aQe8&list=PL_or3CGiooApC0hq4HHik1DQC1agDCTCv&index=6 Thank you again! NpF ragdoll-test-mototbike_v002.c4d ragdoll-tests-v002.c4d
  7. @hrvoje Thanks - I will give that a try!
  8. Greetings all, I’m experimenting with some ragdoll basics, following this guy’s tutorial: https://www.youtube.com/watch?v=l0HoOxRjg5k&list=PL_or3CGiooApC0hq4HHik1DQC1agDCTCv&index=1 The basic setup is a Figure hierarchy-object that’s been made Editable and posed, then placed under a Voronoi Fracture operator with an Inverse Normal distribution type, approximating a single distribution point per mesh object, and the Connectors setting set to “Fixed”. I have a basic fast-moving cube knocking the figure off a motorbike. (.c4d file attached) Though the legs and head remain attached when the figure lands, the arms, hands and feet become detached and fly all over the place. I would like to keep all body parts intact no matter what, but cannot find the settings that would allow this using this basic setup. I also notice that with an even simpler setup (Figure object not “made editable”), a simple drop test will yield the knees of the figure breaking away from the rest of the Figure. I’d love to know why these things happen. It would appear that if you hit the connectors with enough force, they will break rather than hinge. Without moving to more complex setups (custom Connectors per body part, for example), is there a setting that would let me dial in the “strength” of these Connectors so they never break? Many thanks! NpF ragdoll-tests.c4d ragdoll-test-mototbike.c4d
  9. @Hrvoje @vizn @Mash Thank you all for your responses! I found a solution, though it took a little trial & error. The conundrum to me has been that since all the relevant image files were already in my project's /tex folder, it didn't make much sense that C4D should be throwing any errors at render time, but it happened nonetheless. 🤷‍♂️ @Mash - this was good insight - it's always good to know what's happening "under the hood". In any event, it seems the key was wrangling these "missing assets" (or unread paths, or whatever the underlying problem actually was) through the Redshift Asset Manager (vs. the Project Asset Inspector). In the end I simply re-linked assets in the Redshift Asset Manager (shift-select all paths in the list, right-click, and choose "Relink Files") by pointing all paths to the /tex folder. The other solution, which was suggested to me by someone on the Maxon/Redshift forum, was to do a hard "Save Project with Assets". This worked just as well, though truthfully I'm not a huge fan of creating a new project folder for every version of a project I save. Once again, thanks to all of you for your helpfulness! NpF
  10. Greetings all, I am continually getting a “missing textures” error when setting up a multi-Take series in the Render Queue window. The Render Queue’s Texture Error field lists all the supposedly missing textures, though when I open up the Project Asset Inspector, it indicates that all of my textures are present and all texture paths are A-OK. As an extra precaution to make sure all my texture files are present and linked properly, I did the following: Consolidated all paths using the Project Asset Inspector; Switched all paths to “local” using the PAI; Switched all paths to “global” using the PAI; Quit the project and re-opened it for a new session. I did all these things, but I am still getting a “Missing Textures” error when I set up my multi-Take rendering. A couple more things to note: I’m using Redshift as the main renderer; All materials are RS Node materials; I’ve used an offsite rendering service for a similar multi-Take setup from the same file, and didn’t get any texture errors. Has anyone else had this problem?? If so, what was the solution? Preemptive THANKS for any reply! NpF
  11. @Zerosixtwosix Many thanks for your responses! These solutions work well! (I only wish Maxon would address the bugginess of working with the 'Color Layer' node - I have to refresh constantly when using it with Progressive Rendering to see the results the material parameters are supposed to give me.) In any event, I appreciate the quick and concise replies! Cheers NpF
  12. Hi all, There’s a recurring phenomenon that keeps coming up when I switch on the Texture mode button to scale, move, or rotate a texture using a Flat or Cubic projection. Sometimes the aspect ratio stays locked to a square when I want to Scale the texture along one axis or another. I’m not sure why this is - I can’t seem to find the tab or setting that would turn this off. Jonas Pilz’s QuickTip video from R19 seems to suggest that it has to do with turning ‘Enable Axis’ mode on… … though it appears that once I have switched to Texture mode, the Enable Axis mode goes grey and therefore cannot be turned on or off. If I switch back to Model mode, turn on Enable Axis mode, and then switch back to Texture mode, I can still scale the Texture gizmo along one axis or the other (it doesn’t stay locked). So what gives? What would cause the Texture mode gizmo to lock to a 1:1 proportion? (My apologies for not including a .c4d file that replicates the issue - it seems to pop up now and then, but I don’t have an example saved where the Texture mode gizmo is locked.) Preemptive thanks to any responders! NpF
  13. Hi all, I have a very simple Redshift material setup where I'm layering a few RS Ramp nodes via an RS Color Layer node to get some gradient effects in the material's Opacity channel (see attachments). What I'd like to do is create a setup that would allow me to slide and/or rotate each Ramp/Gradient around independently along the geometry's XY (or UV) coordinates. So, for example, the circular gradient (RS Ramp 2), which is plugged into the Color Layer node's 2nd channel, is now totally centered on the geo plane, and I'd like to push it up into the top-left corner. On the same note, I'd like to take the diagonal gradient (RS Ramp 1) and rotate it 10 degrees clockwise. It seems the way to do this might be to add independent UV/texture channels to each Ramp node, and then somehow link these to a UV controller in the viewport, though I have no idea how this is done. (I'm a longtime 3ds Max user, and such a thing is simple in that program: you can apply multiple UVW "gizmo" objects to any geo mesh object, assign each gizmo a UV ID, apply corresponding UV IDs in the object's material, and then transform the gizmos as you would any mesh object using Pos/Rot/Scale, which transforms the texture map accordingly.) Is there something analogous to that system in C4D? Or is there some other paradigm that would make something like this possible and fairly simple? Hoping someone can point me in the right direction. Many thanks! NpF redshift.ramp.node.c4d
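To make the ask concrete, the transform I have in mind is just 2D UV math: rotate around a pivot, then offset. A plain-Python sketch of that math (the function name and sign conventions are mine, not a Redshift API):

```python
import math

def transform_uv(u, v, offset=(0.0, 0.0), angle_deg=0.0, pivot=(0.5, 0.5)):
    """Rotate a UV coordinate around a pivot, then offset it:
    the math a per-ramp UV transform would apply before the ramp lookup."""
    px, py = pivot
    a = math.radians(angle_deg)
    du, dv = u - px, v - py
    ru = du * math.cos(a) - dv * math.sin(a)
    rv = du * math.sin(a) + dv * math.cos(a)
    return ru + px - offset[0], rv + py - offset[1]

# No offset, no rotation: coordinates pass through unchanged
print(transform_uv(0.5, 0.5))
```

Pushing the circular ramp toward a corner would just be an offset, and the 10-degree turn of the diagonal ramp would be the angle argument (the sign of which depends on the UV orientation).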
  14. @Leo D No worries, and thanks for the clarification! The C4D Shader has tons of amazing features, especially when used with the MoGraph tools, though to your original point, I haven't found a ton of uses for it when doing straightforward photoreal modeling / rendering work. The brick & tile shader looks promising - was totally hoping the RS team would do something like this. I was a primarily 3dsmax guy for a long time before I learned Cinema. That program had something similar - parametric bricks & tiles - which they introduced probably 20 years ago and I used it often. Hopefully this has as many features and more... Thanks again for the response! N
  15. hi @bentraje Thanks for the response! Yeah, sounds about right! I posted the same question in the official Maxon/Redshift user forum. The only response I got from the tech dev was this link: https://redshift.maxon.net/topic/27941/c4d-upcoming-improvements-4 (the last update on this thread was in 2019, lol) The fact that he/she didn't write anything else makes me inclined to think he's sick of being asked this same question 😆 I haven't used any of the other renderers you mentioned, but it is a real head-scratcher that the Maxon devs won't push for more seamless RS integration. Just seems weird. Maybe it's low priority over all other new features, or maybe they have some big renderer overhaul planned in near-future versions. One can only hope and dream, LOL. In any event, I appreciate the response and commiseration. Best, NpF
  16. Hi @Leo D Thanks for the response! I'm a little confused by your wording. As far as I can tell, the Node Material UI won't support the C4D Shader node that IS available in ShaderGraph. Do you know of some work-around? I've read about using a "Reference" node in the Node interface, though I haven't figured out how that works. Have you? I'm also confused about your comment about "baking C4D Shader". By this do you mean how a C4D Shader node gets "baked" when you plug it into a Texture node when using ShaderGraph? (Or are you referring to "baking textures" by way of UV maps as a way to avoid real-time lighting calculation? In that case, I don't think there's a particular node for that in either ShaderGraph or the Node UI.) In any event, I appreciate your response NpF
  17. Hi all, Can anyone say anything about the specific functional differences between creating Redshift materials using the ShaderGraph versus using the newer Nodes UI? Aside from the ShaderGraph nodes that carry the "C4D" prefix (C4D Shader, C4D Vertex Map, C4D Hair Attributes), are there any other ShaderGraph nodes that are unsupported by the RS Node material UI?? In other words, if I'm creating a new scene to be rendered only in Redshift, is there any reason to use ShaderGraph-based materials other than if I need to use functions available in the C4D Shader node? Also, does anyone know if Maxon has plans to update the Nodes UI to include ALL of those legacy ShaderGraph-based nodes? Preemptive thanks to any responders, NpF
  18. Hi all, I have a simple Cloner grid setup (no Effectors) that's 'Blending' between two groups that have the same object types in their hierarchy. Obviously enough, each clone ends up being unique, presumably because the multiplier that 'Blend' mode uses is linear and derived from each clone's point number. I'd like to know if it's possible to somehow apply a range mapper on top of these values so that I have some control over how gradual the transition is (without adding a 3rd or 4th group to the stack of objects being cloned). Is this possible using either an Xpresso setup or a Nodes setup? Or, better yet, is it possible to achieve the effect with one or more Effectors (I suppose this would require the Effector to single out individual parameters within the objects being cloned, and apply values to them based on Field values)?? Any guidance would be hugely appreciated Many thanks! NpF
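To illustrate what I mean by a range mapper, here's the kind of re-mapping I have in mind, sketched in plain Python (the function and values are illustrative only, not an actual Xpresso or Effector API):

```python
def remap(t, in_min=0.0, in_max=1.0, smooth=True):
    """Range-map a clone's linear 0..1 blend weight, optionally easing it
    with smoothstep so the transition between the two groups is more gradual."""
    t = (t - in_min) / (in_max - in_min)
    t = min(max(t, 0.0), 1.0)  # clamp to 0..1
    if smooth:
        t = t * t * (3.0 - 2.0 * t)  # smoothstep easing
    return t

# 12 clones: index / 11 is the linear weight Blend mode would use.
# Squeezing the input range makes the first/last clones pure copies of
# group A / group B, with the morph concentrated in the middle clones.
weights = [remap(i / 11.0, in_min=0.25, in_max=0.75) for i in range(12)]
print(weights[0], weights[-1])  # 0.0 1.0
```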
  19. Hello all in Core4DLand I’m hoping someone can give some advice on a would-be MoGraph rig - though I’m not 100% convinced this shouldn’t be done using particles instead of Cloners/Matrices. The effect I’m looking to achieve would be an animation made up of 7 stages (A - G), something like what’s in the screenshot below. (The project file is also attached - hopefully it's more self-explanatory than my lo-fi screenshot 😳) To clarify things a bit, these are my general intentions for each stage: A - The animation begins with a simple rectangular grid of clones (or particles) whose visibility ‘wipes on’ from one end of the grid to the other along one axis. I assume I'd achieve this with a simple moving Plain Effector affecting the Scale value of the clones. B - Once the grid is fully apparent, the clones rise up randomly along the world Y-axis to form a little random-looking point cloud / cluster. I assume I'd use a Random Effector or Field affecting the Y-position. C - Once the clones’ Y-positions are sufficiently randomized along Y, the cluster begins to move toward X+ along a drawn spline path. The movement here should be something like a delta pattern: the clones in the middle of the cluster move up the path first, and are then followed by the clones at the edges of the cluster, until all the clones are moving along the path. D - The cluster continues along the path. The randomness of the clustering continues, but the spread of the cluster tightens up a bit so that the clones are closer to the line of the path, and the whole cluster looks somewhat long & thin, like a comet, where the head is a bit more dense than the tail. E - The path, which was a simple straight vector at the start, starts to gently curve sideways & downward toward the Z- and Y- world axes. At the same time, the path begins to helix, and the clone ‘comet’ cluster follows the path of the helix. 
F - Once the cluster comes out of the helix / bend, the path straightens out again and a portion of the clones (maybe 20 of 300) have a cluster of target objects waiting for them (in this case, circles). While most of the clones continue south along the path THROUGH the cluster of circles..., G - ... a few random clones hit their targets and ‘explode’. Each clone that hits a target will “explode” in a 2D radial pattern of new clones, which form small-ish circles around their target points, a bit like fireworks. Something like this: https://media.giphy.com/media/wX9VpOwOzPQSIqdoC5/giphy.gif My first stab at this has me using an Inheritance Effector to run a basic morph through each phase (except for the last phase, where the clones are supposed to explode into circles). The results are about as wonky as I’d expected, so I’m looking for other options. It’s particularly important that the cluster of clones follows the path in a way that looks consistent - meaning that each clone should generally keep a consistent distance from the centerline as it travels, and bank and rotate accordingly, rather than favoring a particular axis the way a clone cluster tends to do with a Spline Effector. I’m wondering if I should be using a Deformer instead to achieve this. I’m also wondering if the whole rig wouldn’t be better served as a set of particle FX, rather than using Cloners and Matrix objects. I’d also assume the technique for doing the ‘explosion’ stage toward the end of the animation will depend on how the previous stages are set up, so this can be of lesser priority for now. That said, I’d imagine we’d want to specify some point indexes with a manual selection or an Xpresso node, and then match point positions to the origin point of each target object - though again, I’d think the technique should not be determined until the initial setup and animation are working well. In any event, hoping someone out there has done something like this and can offer some pointers. 
C4D project file is attached! Thanks NpF mograph clone follow forum 02.c4d
  20. @Might and @Deck I thank you both very much for the feedback! I was able to figure things out with your help. It turns out all I really needed to do was turn off "Background" under the shading tab, and instead drop a Background Object into the scene with the color value set to pure white. @Might - as you said, there's no need for a second object, it just needed 2 materials per object: One being an S&T shader material, and the other a Standard material with an animated alpha shader The thing also to remember is to make sure the 'Transparency' option is ticked in the Options panel in Render Settings Thanks again! NpF
  21. @imashination Thanks for the reply & for waxing some wisdom on the topic. Frankly, I never knew that, as a general rule of thumb, one should avoid fast camera movement past a subject with vertically oriented lines. At the end of the day, I found the most workable solution to be post-production blur (in this case RSMB in AfterEffects). Not quite photographically accurate, but close enough. As to your comments: "Sorry, Im not sure I can see where anyone suggested rendering at double the frame rate?" Since my baseline was 24fps, it seemed logical to me for this to mean that doubling the original fps would make sense (rather than setting the frame rate at some arbitrary number). That said, in general, non-film/TV-legacy frame rates are more often than not out of budgetary range for most of my projects. "2) Use motion blur. I can see your renders do have motion blur, but the settings arent high enough for the movement. On this project you need a 180 degree shutter, or about 0.02 seconds (1 second, divided by 24fps, divided by 2 (for the 180 degree shutter angle)) of blur per frame in order for it to look natural. Less than this and it will flicker, higher than this and it will smudge." I appreciate your explaining this with a bit of mathematical formula. I'll definitely be bookmarking this for future use!!! Thanks again, NpF
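For my own future reference, that shutter formula boils down to one line; a quick Python sketch (the function name is my own):

```python
def blur_seconds(fps, shutter_angle_deg=180.0):
    """Per-frame motion-blur duration: (1 / fps) * (shutter angle / 360)."""
    return (1.0 / fps) * (shutter_angle_deg / 360.0)

print(round(blur_seconds(24), 4))  # 0.0208 -> the "about 0.02 seconds" quoted above
```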
  22. Hi all, I've been playing around with the Sketch and Toon shader for a project. While I love the options I get when dialing in the line quality, there's a certain effect I'm going for that I can't seem to find in the UI options, either in the Material Editor or in the Render Settings. In the attached c4d file, I have 3 distinct objects (from back to front: cube, sphere & cone). I have 3 separate Sketch & Toon materials which are identical except for one parameter, the Animation parameter; each material is assigned to its respective object, but the way the linework animates on is offset for each object/material by 12 frames, so the S&T shader draws the cube first, then the sphere, and then the cone. What I'd like to also do is have the 'fill' of each object fade from 0% opacity to 100% opacity at the same start-and-end keyframes as the linework animation. In other words, as the black stroke lines appear for each object, the white fill fades up at the very same time, but we should still be able to see the contour lines of one object behind another object, so long as the foreground object's 'fill' opacity is not at 100%. In this frame you can see how the silhouette of the sphere culls out the lines of the cube - I'd like to see the cube's lines not culled out, but rather still visible behind the sphere as the sphere's opacity increases over time, while at the same time the stroke gets drawn around the sphere. I'm wondering if this is achievable using only materials and shaders - it would be great to NOT have to render a bunch of layers for compositing in order to achieve the effect. Preemptive thanks to all responders, NpF sketch-n-toon-test.c4d

Copyright Core 4D © 2023 Powered by Invision Community