pfistar

Registered Member · Posts: 115

Everything posted by pfistar

  1. @KEPPN @IMASHINATION Thanks to both of you for your responses. I did some back-checking to see if there was indeed a frame-rate mismatch between my C4D output and my editing app (in this case After Effects, which is what I usually use for any piece shorter than 3-4 minutes).

     In any event, I double-checked my project settings and output settings in C4D - I'm certain everything was set to 24 fps at render time. I'm not sure there's any other parameter that would affect output frame rate besides the one in the Project Settings panel and the one in the Render Settings panel, but I'd love to know if I've missed something here. It's worth noting that my original output had some Redshift motion blur at a low sample setting.

     I also double-checked project settings, import preferences, and composition settings in After Effects. Again, every setting was at 24 fps at import time in every case, and all comp settings and output settings were set to 24 fps. It might be worth mentioning that my output format has always been .mov QuickTime with H.264 compression, though this might be irrelevant. I did try a re-import just in case, but the result was the same as far as I can see. It's also worth mentioning that the jittering has always been quite pronounced whenever I've done RAM previews in AE.

     For both shots in question, I did use a slight linear time remap (96% for shot01 (0:09) and about 90% for shot02). I re-rendered from AE after removing the time stretch, though it didn't appear to make much difference in terms of reducing the jitter.

     @IMASHINATION - Taking your advice about rendering out of C4D at double frame rate, I've done a few tests, starting with re-rendering the scene at 48 fps with a slight bump in motion-blur sampling. I then imported this into After Effects with a 48 fps import setting and brought it into a comp also set to 48 fps, then rendered the clip out of After Effects as a 24 fps .mov. There's still a little bit of jitter, though it's less pronounced than before. I re-rendered using frame blending at the second setting, again with a 24 fps .mov output, and this made the jitter even less pronounced. Finally, I did a third re-render, again at a 24 fps .mov output, using ReelSmart Motion Blur, and that makes the jitter virtually undetectable!

     @IMASHINATION - Questions for you, since you appear to have some knowledge on the topic: Is this issue generally common with scenes that have lateral camera motion and a lot of parallel vertical elements? Why render at 2x FPS (in this case 48 fps) from my 3D program if my final output (from AE) will ultimately be 24 fps? Don't half the frames get dropped anyway, or is there some kind of resampling that happens at render time in AE?

     Thanks again to both of you for the response. NpF
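To make the frame-blending result above concrete: blending a 48 fps render down to 24 fps averages pairs of consecutive frames instead of dropping every other one, so each output frame picks up roughly an extra half-frame of motion smear, which is what masks strobing on fast lateral pans (and, as I understand it, this is also the answer to the 2x-FPS question - with frame blending enabled, AE folds the extra source frames into the output rather than simply discarding them). Below is a minimal sketch of that averaging with NumPy and imageio; the file names are hypothetical, and AE's frame blending and ReelSmart's optical-flow retiming are more sophisticated than a plain average:

```python
# Naive 48 fps -> 24 fps downsample by averaging frame pairs - an
# approximation of simple frame blending, not of optical-flow tools.
import numpy as np
import imageio.v2 as imageio

# Hypothetical 48 fps frame sequence rendered out of C4D (96 frames = 2 s).
frames = [imageio.imread(f"shot01_48fps.{i:04d}.png") for i in range(96)]

blended = []
for a, b in zip(frames[0::2], frames[1::2]):
    # Each 24 fps output frame is the mean of two consecutive 48 fps
    # frames, adding roughly half a frame of extra motion smear.
    avg = (a.astype(np.float32) + b.astype(np.float32)) / 2.0
    blended.append(avg.astype(np.uint8))

# Writing .mp4 here requires the imageio-ffmpeg backend.
imageio.mimsave("shot01_24fps_blended.mp4", blended, fps=24)
```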
  2. Greetings everyone - I'm wondering if this is a common problem: in this video, between 00:09 and 00:14, and to a lesser degree between 00:21 and 00:29, there is what appears to be some jitter or staggered movement as the camera pans across the subject. As far as I can tell, all the motion blur settings are turned on, but perhaps I forgot something? I've also been told it might be a frame-rate issue. In any event, I'm wondering if anyone else has had this problem and, if so, how you resolved it. Many thanks, NpF
  3. @cerbera Though this looks pretty awesome, it's not quite what I had in mind. Per the image I attached, I'm shooting for something simpler:
     - a single thin cylinder of "light" (maybe 0.5 cm in diameter) that looks like a 'volume effect' a la older CG rendering techniques
     - mostly semi-transparent, but more opaque toward the center-line of the beam, with an even falloff
     - some control over the falloff opacity along the length of the beam
     I'll look around further to see what I can find - thanks for the feedback. NpF
  4. Hi all, attached is an image from an old project using C4D's Standard renderer. The vertical light-beam effect here was achieved with a volumetric directional light. I'd like to replicate the look using Redshift. Since the volume rendering system works differently in Redshift, it doesn't appear I can simply set up an RS directional light, limit its x/y boundaries, and apply a volume (the way you can with a C4D standard light). So I'm wondering what some common workarounds might be. I did try an RS spotlight with a tiny cone angle, though I'm not satisfied with the effect since it still looks conic. I suppose using geometry is the other way to go - I presume I can use a volumetric shader, as one might use to render clouds or murky liquid, though it would be nice to have the volume actually cast light too. I haven't found any great tutorials on this - does anyone out there have a suggestion? Preemptive thanks to all responders! NpF
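If the geometry route turns out to be the way to go, a scripted starting point might look like the sketch below: a thin cylinder at beam proportions, to which an RS material with an opacity falloff (toward the edge, and optionally along the length) would then be assigned by hand. This is only a scaffold under that assumption - the dimensions and name are placeholders:

```python
import c4d

def make_beam(doc, length=200.0, radius=0.25):
    # Thin cylinder as a stand-in for the light beam; the volumetric /
    # falloff material still has to be built and assigned afterwards.
    beam = c4d.BaseObject(c4d.Ocylinder)
    beam[c4d.PRIM_CYLINDER_RADIUS] = radius
    beam[c4d.PRIM_CYLINDER_HEIGHT] = length
    beam.SetName("light_beam")
    doc.InsertObject(beam)
    c4d.EventAdd()
    return beam

make_beam(doc)  # `doc` is predefined in the Script Manager
```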
  5. @HappyPolygon Thanks for the response. I played around with the Delay Effector, though I'm not sure it was relevant to what I was trying to accomplish. However, I just had a "duh" moment: it didn't occur to me that I could animate the Effector multiplier in the L1 Cloner, and that it would be a parameter like any other in the L1 Cloner, which could be affected by the L2 Effector. 🤦‍♂️ So this is exactly what I needed - a way to offset or layer the animation of one effector applied to a lower cloner with another effector applied to a higher cloner. In any event, I do appreciate all your input - not sure I would have figured it out without having someone to bounce things off of 🙏🙏🙏
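For anyone wanting to script the same idea: an effector's Strength is an ordinary animatable parameter, so keyframing it is one way to set up the ramp that a higher-level effector's Time Offset can then stagger per clone. A rough c4d Python sketch - it assumes `effector` is the Plain Effector already sitting in Cloner L1's Effectors list, the frame range is a placeholder, and ID_MG_BASEEFFECTOR_STRENGTH is the strength parameter ID as I understand the MoGraph API:

```python
import c4d

def keyframe_strength(effector, f0=0, f1=24, fps=24):
    # Keyframe the effector's Strength from 0% to 100% so that an
    # L2 effector's Time Offset can stagger this ramp across clones.
    desc = c4d.DescID(c4d.DescLevel(c4d.ID_MG_BASEEFFECTOR_STRENGTH,
                                    c4d.DTYPE_REAL, 0))
    track = c4d.CTrack(effector, desc)
    effector.InsertTrackSorted(track)
    curve = track.GetCurve()
    for frame, value in ((f0, 0.0), (f1, 1.0)):
        key = curve.AddKey(c4d.BaseTime(frame, fps))["key"]
        key.SetValue(curve, value)
    c4d.EventAdd()
```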
  6. Hi @HappyPolygon Thanks for the reply, and for the technical breakdown of the different object classes. You might have seen this post before under a different category - I moved it here to "Miscellaneous" because I didn't get any responses under the other category.

     In any event, what I'm trying to accomplish, which is not visible in my example C, is an effect similar to my example B, where a Plain Effector with a Linear Field offsets the animation for each set of cloned clones using the Effector's Time parameter. (So, similar only insofar as the top-level clones get offset in time; obviously the animation of the bottom-level clones is different. I hope this makes sense to you.)

     Now, it's obvious enough that I'm not getting the time offsets I want in example C because the cubes whose positions are being randomized are animated by an Effector, and that animation cannot be affected by another Effector. So I'm wondering what the workarounds for something like this might be. I did give a couple of alternative methods a try - there's a file attached that has two example techniques.

     A
     - Here I'm distorting a few grid objects collected under a Connect generator.
     - A Displacer deformer applied under the Connect generator gives me the spatial randomization on the points.
     - The sphere clones are created using 'Object / Vertex' mode in the L2 Cloner, using the L1 Cloner as its template.
     - As in earlier examples, the L2 Cloner has a Plain Effector applied, with an offset in the Time parameter, and that gives us the ramp-up of animation from one clone to the next.

     B
     - Here I'm using an identical Displacer as in A for the randomization.
     - Instead of geo primitives, I'm applying the Displacer to a Matrix grid, which is then cloned using Cloner.L1.
     - As in A, the L2 Cloner has a Plain Effector applied, with an offset in the Time parameter, which gives me the ramp-up I want.
     - I would love to be able to use the L2 Cloner to assign spheres to each individual node in this array of cloned Matrices, to get a visual result similar to example A, but it appears C4D won't allow it. I'm not sure why this is, but I'd have to guess that the transform data of the nodes in the Matrix gets lost somehow when you run it through one Cloner and then try to run that through a second Cloner. If this is the case, would you happen to know a way to convert that Matrix node data so that the node points in that cloned array of Matrices can be read as geometry points by the L2 Cloner? (One baking approach is sketched after this post.)

     Aside from the above, I'd still wonder if there's a way to affect the transform-based numeric values of one Effector with the time-based numeric values of another Effector applied to a clone array, in order to achieve results similar to my new examples A & B. Do you imagine an Xpresso setup could make that happen?

     Thanks again for your response! NpF cloner-trigger-animation-test-alt_v00.c4d
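On the Matrix question specifically, one workaround to test: bake the Matrix object's node positions into a plain point object and use that as the L2 Cloner's vertex template, since the MoData doesn't appear to survive the trip through a second Cloner. A rough sketch with the MoGraph Python API - note it reads a Matrix object directly, not a Matrix nested under a Cloner (which may be exactly where the data gets lost), and the function name is mine:

```python
import c4d
from c4d.modules import mograph as mo

def matrix_to_points(doc, matrix_obj):
    # Read the Matrix object's per-node transforms and bake the node
    # positions into a point-only PolygonObject that a Cloner in
    # Object/Vertex mode can use as a template.
    md = mo.GeGetMoData(matrix_obj)
    if md is None:
        return None
    matrices = md.GetArray(c4d.MODATA_MATRIX)
    pts = c4d.PolygonObject(len(matrices), 0)  # points only, no polygons
    pts.SetAllPoints([m.off for m in matrices])
    pts.Message(c4d.MSG_UPDATE)
    doc.InsertObject(pts)
    c4d.EventAdd()
    return pts
```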
  7. Greetings all, I am trying to create a setup which would allow me to use the Time Offset parameter in a Plain Effector (Plain (L2)) to trigger the animation of a Field/Effector (L1) that's driving the animation of a clone array (L1) that's a child of a clone array above it (L2). To clarify further (see screenshots):

     A
     - The Cube being cloned has its size and fillet parameters animated.
     - No animation on Cloner.L1's parameters.
     - Cloner.L2's effector has its Time Offset parameter in use, so the Cube's parameters ramp up according to the Plain Effector's Field.

     B
     - No animation on the Cube being cloned, but
     - Cloner.L1's Count parameter is animated.
     - Cloner.L2's effector has its Time Offset parameter in use, so Cloner.L1's Count parameter ramps up according to the Plain Effector's Field.

     C
     - No animation on the Cube being cloned, but
     - Cloner.L1 has a Random Effector applied, and the Linear Field's Position value is animated to randomize the clones over the course of a few seconds.
     - Cloner.L2's effector has its Time Offset parameter in use, but Cloner.L1's Effector does not respond to Cloner.L2's Effector, so I'll conclude that the animation values in Cloner.L1's Effector are not being applied to Cloner.L2's value.

     Is there a workaround for this? Perhaps I haven't "turned over every stone" of the available parameters? Perhaps there's an Xpresso setup which would make it possible to use the Time Offset values in the L2 Effector on the Position value of the L1 Effector's Field. Pre-emptive thanks to any responders! NpF cloner-trigger-animation.c4d
  8. I suppose that does indeed work - thanks
  9. @HappyPolygon I appreciate the reply - I guess we're asking for too much 😆😆 I had come up earlier with a more brute-force solution (using Random mode in the top Cloner with multiple Null objects included in the objects being cloned - see attachment), though the drawback here is that the only parametric control is the Seed value in the Cloner object's parameters. The idea, as with using a Point Selection as the template, is to keep the overhead low by avoiding an Effector with randomness applied to the Scale value, where C4D has to calculate even invisible geometry. Ah well - here's hoping someone else can chime in. Thanks again, NpF procedural-random-gridpoints-v05.c4d
  10. @HappyPolygon Many thanks for the response! I gave your solution a try. It does work insofar as the point count increases with the use of the Correction Deformer, and points are now distributed across the whole array of grids. However, the Field that's driving the randomness doesn't appear to do anything when I adjust its parameters - it's as if the point info in the Selection tag is frozen. I tried making adjustments to the Random Field and then hitting 'Update' in the tag, but that doesn't seem to do anything. I would imagine the Field's parameters would determine the Selection tag's set of points, though that doesn't seem to be the case once the Correction Deformer has been put to use. Does the deformer 'bake' the point index? (I have no idea what's going on under the hood here. 😳) Thanks again for the suggestion! NpF
  11. Greetings all, I'm looking for a way to access a Point Selection on an array of cloned objects. The idea is to get a random selection of those points to use as the template in another Cloner object. My setup here shows an array of cloned grids which are (presumably) bound together under a Connect object:
     - I set the 'hero' Cloner to Object mode and use the Connect object as its vertex template.
     - I then use a Point Selection tag, driven by a Random Field, which was copy-pasted from another object.
     The result is not quite what I'm expecting: I'm only getting clone distribution on the first of the 25 grids in the Connect object, where I'd like to see the clones randomly distributed across all 25 grids, using the parameters of the Random Field. Is there any way to access ALL the points in that 25-grid array using something like the Point Selection tag? Ideally I'd like to keep this setup procedural all the way through, which is why I haven't collapsed the cloned grids into editable mesh objects. procedural-random-gridpoints-v02.c4d
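If a scripted fallback is acceptable here: once the Connect result exists as real geometry (made editable, or read from its cache), a random selection across the points of all 25 grids can be generated directly. A minimal sketch - it assumes `obj` is an editable point object, and the seeded `random` stands in for the Random Field:

```python
import c4d
import random

def random_point_selection(obj, fraction=0.2, seed=42):
    # Select a random fraction of ALL points on the mesh and store the
    # result in a Point Selection tag, so a Cloner in Object mode can
    # distribute clones across every grid rather than just the first.
    sel = c4d.BaseSelect()
    rng = random.Random(seed)
    for i in range(obj.GetPointCount()):
        if rng.random() < fraction:
            sel.Select(i)
    tag = c4d.SelectionTag(c4d.Tpointselection)
    sel.CopyTo(tag.GetBaseSelect())
    obj.InsertTag(tag)
    c4d.EventAdd()
    return tag
```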
  12. (Identical text to item 7 above - the same question originally posted under a different category; see item 6 for context.) cloner-trigger-animation.c4d
  13. As for using a Formula Field - I love this! (Maybe one day the syntax will make more inherent sense to me. 😳) However, @Havealot when I try to replicate what you've done in the scene you provided, there's something specific about how you've set up the Fields list that I can't seem to replicate. It seems your Formula Field is not pulled up from the Field list within the Plain Effector, but rather dropped in from somewhere else. Can you tell me how you achieved this? Many thanks again to both of you! @deck @Havealot NpF
  14. @deck @Havealot Thanks to both of you for the responses and suggestions. It looks like you both might be right. In the case of using a Plain Effector, a single Effector won't work if the clone array's points are all at the same XYZ. However, giving them a slight offset on the z-axis will allow you to stagger the scale by moving a Linear Field across those Cloner points along the z-axis - but then the problem is that the circles are spaced slightly along the z-axis. Not ideal, but remedied with a Step Effector, which can push the initial offset from the Plain Effector backwards so the circles are all at (or very near) z=0. A little bit hacky, but it works. (See attached.) circle-offset_plain-effector.c4d
  15. Hi all, this feels like a noob question, but I'm trying to achieve exactly this effect (gif below) with a MoGraph Cloner & Effector. Intuitively it seems like a job for the Step Effector, but I can't seem to make it happen despite trying a number of configurations. This setup is just brute-force keyframe animation - I would love to know a more procedural solution. Many thanks! NpF
  16. Hi all, I'm just starting to use R25 on projects and have been frustrated by mouse navigation in the Timeline window. In this case I'm working on a remote Windows machine, and I have reason to believe my problem is related to this, as I don't have the same problem when working on a non-remote machine. My general issue is that when I use Alt+MMB or Alt+RMB to zoom and pan, the zoom and pan are super-drastic in both cases, making the range I want to work with frustratingly hard to frame. Has anyone out there had this issue - particularly on a remote machine? If so, have you been able to resolve it? Many thanks! NpF
  17. Greetings all, I just bought a new system that's been very crash-prone during Cinema 4D renders. I haven't even put it to the test in a production situation - I have just been running test renders of old projects to see how the system performs. So far, I've tested three different scenes, one using the C4D Standard renderer and the other two using Redshift. All three scenes have crashed or frozen up C4D during the rendering of a sequence. Even worse, one of the scenes crashed the whole system and Windows did an auto-restart.

     Basic system specs:
     - Processor: Intel Core i9-11900K
     - Motherboard: ASUS Prime Z590-P
     - Storage: Seagate FireCuda 520 | 2TB Samsung 860 PRO
     - Graphics Card(s): 1x GeForce RTX 3080 10GB (VR Ready)
     - System Memory: 32GB DDR4 3200MHz

     The first action I took was to make sure all relevant software was properly updated (C4D, Redshift, NVIDIA drivers). Still crashing. I'm at a loss for how to continue troubleshooting. Are there steps I can take to determine whether it's a hardware issue or a software bug? As mentioned, these crashing scenes were created on an older machine with lesser specs, but I never had crash issues with any of them. Preemptive thanks to any responders, NpF
  18. I was able to figure this one out - the requirements being:
     - a spline-based 'worm' animated with a Displacer deformer
     - Cloner 'legs' running along the spine of the worm - the 'legs' need to stick to the normal direction of the worm segments as it wiggles
     - the 'worm' body needs to have a consistent round cross-section and rounded caps at each end
     @noseman 's great Edge To Spline plugin (free!) opens up some options. Many thanks @noseman !!!
  19. Greetings all, I have a thing in mind to do, which is to:
     - run some cloned 'legs' along the length of a 'worm' that is animated by way of a Displacer deformer
     - have the 'legs' rotate according to how the worm bends and wiggles - or, to put it another way, have the 'legs' inherit the normal direction of the points or polygons they are cloned onto
     - at the same time, have the body of the worm be a cylinder with capped ends; ideally this is accomplished using the Sweep tool, so that the deformer animation can be applied to the spline object, letting the width of the worm's body stay consistent while it wiggles around

     Below is a screenshot showing several setups, all of which fall short of ideal in some way or another. I've also attached an animated file which should explain things better, if anyone wants to look at it. I'll work backwards from D to A to explain each one.

     D - Here we have Spline.D animated with a Displacer object with a Noise map in it. Spline.D is the template for the Cloner object set to 'Vertex' distribution mode, and that same spline is also used to generate the Sweep object. The problem here is that when the Cloner is set to 'Align Clone', the 'legs' jump around between certain frames - it looks like the spline's point vectors don't behave the way a mesh object's point vectors would when deformation is applied with a Displacer. If I turn off 'Align Clone', the clones no longer jump around, but then they're bound to being parallel to one another, which is not the effect I want. I'm wondering if there's a way to actually see the spline's point vectors in the viewport, and also a way to make them behave similarly to a mesh object's. (See the tangent sketch after this post.)

     C - Here I've added the Rail option to the previous Cloner setup. The Rail object is an Instance of the base spline (Spline.C) with its position slightly offset. The Rail lets me control the general direction, but the vectors, and hence the clones, all still run parallel, which again is not what I'm after. I'm wondering if there's a way to offset the instanced spline while it's animating so that the points are offset according to their vector normals (like the 'Move Along Normal' modeling tool)?

     B - Here I've introduced a Plane object, which is simply a strip of polygons that is now being used to distribute the clones (via Polygon Center distribution). The Plane object has the same number of polygons as the spline has edge segments, and it follows the animation of the spline via a Spline Wrap deformer. Playing with the 'Up Vector' and 'Banking' settings in the Spline Wrap deformer gives me an approximation of what I'm looking for, but I'm noticing that the Sweep object and the clone-distribution object (the Plane) don't quite move along the same vectors, so you get a bit of a 'sliding' effect. Not terrible, but not ideal.

     A - Finally, instead of using a spline as my distribution object, I'm having the Displacer act directly on the Plane object, whose effect I like a lot. I'm wondering if there's a relatively easy way to use the Plane object's geometry to generate the 'worm' body, so that it follows the animation while maintaining a consistent width/radius.

     I appreciate any feedback - preemptive thanks to all responders. NpF controlling-normal-vectors-on-spline.c4d
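A way to inspect the flipping described in D: approximate each spline point's tangent by differencing its neighbours. The tangent itself is stable, but the rotation around it is underdetermined, which is presumably why 'Align Clone' pops between frames. A quick Script Manager sketch - it assumes the object is a point/spline object, and deformed positions would have to be read from the deformed cache:

```python
import c4d

def point_tangents(spline_obj):
    # Approximate per-point tangents on a spline by central differences.
    # The spline defines the tangent direction but not the twist around
    # it, which is why aligned clones can flip as the spline deforms.
    pts = spline_obj.GetAllPoints()
    tangents = []
    for i in range(len(pts)):
        a = pts[max(i - 1, 0)]
        b = pts[min(i + 1, len(pts) - 1)]
        tangents.append((b - a).GetNormalized())
    return tangents
```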
  20. @bentraje Thanks for the response, and for confirming that I'm not crazy 😄 I definitely share your sentiment regarding integration, as I'm sure many others do as well - Maxon acquired Redshift in early 2019. What are they waiting for? Regarding the term 'playblast', I guess I assumed it was a more universal term for 'a viewport animation rendered to a movie file' - by 'universal' I mean software-agnostic. I was never much more than an occasional Maya user, but it sounds like this is the official term Autodesk uses for viewport renders, so it's become something of an industry term in every studio I've worked at. Apologies for any confusion about that. Cheers, Nik
  21. Hi @bentraje Thanks for the reply. I'm not sure I follow you. The way I have things set up, Shift+R is the command that tells C4D to invoke rendering (essentially the 'render' button), which automatically pulls up the Picture Viewer window and shows the frame buffer as rendering happens frame by frame. Shift+R will of course also invoke whatever the active render setting happens to be, as determined in the Render Settings window (Ctrl+B), and C4D will render according to those settings.

     Now, in the Render Settings (Ctrl+B) window, I know that switching to "Hardware/Software Render", as you say, tells C4D to render whatever's showing in the viewport for the active camera. However, what I'm saying is that this is only true as long as the shaders on my geometry are Standard C4D shaders. If they happen to be Redshift shaders (which is what I use most of the time these days), C4D's Hardware/Software renderer renders them as solid black (screenshot #2, above), even though they have lighting/shading on them in my active viewport (screenshot #1, above). In other words, if all my scene objects have Redshift materials, my 'playblasts' render all objects solid black, no shading or lighting, which makes them largely useless.

     I've been using a clumsy workaround: I set up a separate 'playblast' sub-Take for all my cameras, and for those Takes I swap out my Redshift materials for Standard C4D materials. By doing this, C4D's viewport renderer renders things just as they're seen in my viewport. This is generally not too disruptive to the workflow, but it can become so if my scene happens to have a lot of materials. (A scripted version of this swap is sketched below.)

     In any event, I'm wondering if I've missed some setting somewhere that would let me render exactly what I see in my viewports when I have Redshift materials assigned to my objects. Thanks again! NpF
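The material swap mentioned above can also be scripted, so the 'playblast' Take doesn't have to be maintained by hand. A rough c4d Python sketch that reassigns Texture tags by material name - the name mapping is hypothetical, it walks the whole object tree, and it leaves the Redshift materials themselves untouched:

```python
import c4d

def swap_materials_by_name(doc, name_map):
    # Reassign every Texture tag whose material name appears in
    # name_map ({"RS_Metal": std_metal_mat, ...}) to the given
    # stand-in Standard material before a viewport preview render.
    stack = [doc.GetFirstObject()]
    while stack:
        op = stack.pop()
        if op is None:
            continue
        stack.append(op.GetNext())
        stack.append(op.GetDown())
        for tag in op.GetTags():
            if tag.CheckType(c4d.Ttexture):
                mat = tag[c4d.TEXTURETAG_MATERIAL]
                if mat is not None and mat.GetName() in name_map:
                    tag[c4d.TEXTURETAG_MATERIAL] = name_map[mat.GetName()]
    c4d.EventAdd()
```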
  22. Hi @Igor What is a playblast? A playblast is a quick preview that lets you make a "sketch" of your animation, providing a realistic idea of your final render result without requiring the time needed for a formal render. "Playblast" basically just means "viewport render" (or "GPU render", if you're old enough to remember a time before GPU raytrace engines were commonplace).
  23. Hi all, I'm a big fan of viewport preview playblasts (and so are most of my clients). Am I missing something, or is there simply no way to create proper viewport renders once you're using RS materials and lights? To be clear, I can create OpenGL output file sequences using the "Make Preview" command from the Animate menu, as well as from the Render Settings panel. However, neither way actually renders the lighting and shading that I see in my viewport - instead, all shaders render as solid black. Screenshots: what I see in my viewport in the Redshift node space; what I see when I switch to the Standard/Physical node space; and what I get however I try to render using the viewport's renderer. Anyone have any remedies? Big thanks! NpF
  24. Greetings all, I've hit a little snag using the MoGraph Multishader, and I'm a little unclear on how it actually works. What it seems like it should do, and does in Linear mode, is assign a particular shader to its corresponding clone by index number. However, this doesn't seem to hold when I'm using Grid Array mode, even though the index numbers show the same range. It appears that in Grid Array mode, the Multishader can only iterate over clones along the x-axis of the array (see attachments). Is there any way to override this using an Effector or some other means, so that the Multishader's index corresponds with the Cloner's index numbers and the shader properties (in this case, the diffuse color) are actually transferred? Thanks ahead of time to any responders. NpF mograph-multishader.c4d