Everything posted by NWoolridge

  1. What I think is the major unsung capability, which has been building over the last couple of versions, is the procedural power brought by nodes and capsules. By its nature it isn't front and centre, but the new workflow possibilities afforded by Object Manager stacking of capsules, and by new nodal solutions, are pretty amazing. We need more sample files and tutorials to bring this aspect forward.
  2. C4D 2024: https://www.maxon.net/en/article/maxon-one-fall-release-includes-new-features-and-massive-performance-improvements
  3. Apparently it is an architectural constraint of Redshift that it can have only one render process active at a time. Were you using a different render engine with your iMac Pro?
  4. Ah, so there is! I stand corrected. But as you say, the compromise is so significant it is probably not worth trying, and there seem to be issues with previewing the sim...
  5. Fluid simulations of any complexity are RAM hungry in general; I think BLASTERCASTG is just dealing with an older GPU with some RAM limitations. Embergen is its own thing, and I'm sure many would want to know what their secret sauce is. It is also a single-purpose piece of software, and does not have to account for an existing object system or compatibility with other tools like cloth, MoGraph, etc. Pyro does have to contend with these things, and that creates constraints that Embergen doesn't face. Pyro is currently an initial release, and I imagine that other optimizations are possible or planned (e.g. nanoVDBs).

     This is essentially what raising the voxel size does, by the way: it lowers the resolution of the simulation.

     The macOS issue of GPU performance being optimal at integer screen scaling is a general one; I just mentioned it because people are often unaware that this seemingly innocuous setting can have a major effect on GPU render and simulation performance. Not that all is lost on macOS: the unified memory architecture of recent Macs means that if you have e.g. 64-128 GB of RAM, almost all of that is available for simulation, and there's no need to shuttle sim data between the GPU and the CPU RAM pool for rendering or caching. Macs with lots of RAM can generally have an easier time with larger sims, and see performance gains where discrete GPUs would be shuttling data back and forth.
  6. I'll also add: on Macs, screen scaling has a surprisingly large effect on GPU performance. Say you have a 4K screen. There will be two default resolutions: a Retina standard-scaling one (where 1 virtual pixel is made up of 4 screen pixels), and a native 4K one (where the pixels map 1:1). You can choose other scalings, but a note appears warning that using a scaled resolution may affect performance. It appears because the GPU now needs to do non-integer scaling of the virtual display to the actual display, 60 times a second (or more). This usually has little obvious effect, but running a Pyro sim in a 3D app while forcing non-integer scaling is almost a perfect storm of GPU resource contention. If you are using a non-default scaling, try restoring the defaults and see if it makes a difference. A quick sketch of the arithmetic is below.
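     A back-of-the-envelope sketch of the scaling math (the panel size and "looks like" modes here are illustrative assumptions for a 4K display, not values queried from the OS): HiDPI modes render the UI into a 2x backing store, which then has to be resampled to the panel every frame.

     ```python
     # Why non-default macOS scaling costs GPU time: each HiDPI mode renders
     # into a 2x backing store that is resampled to the panel per frame.
     # Panel width and modes are illustrative assumptions for a 4K display.

     PANEL_WIDTH = 3840  # native 4K panel

     def resample_factor(looks_like_width):
         backing = looks_like_width * 2   # HiDPI backing store renders at 2x
         return PANEL_WIDTH / backing     # per-frame scale factor to the panel

     for width in (1920, 2560):
         f = resample_factor(width)
         note = "integer (cheap)" if f == int(f) else "non-integer (GPU resample every frame)"
         print(f'"looks like" {width} wide: scale x{f:.2f} -> {note}')
     ```

     The default "looks like 1920" mode lands exactly on the panel (factor 1.0); "looks like 2560" forces a 0.75x resample of a 5120-pixel-wide buffer, every frame, forever.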
  7. Currently, Pyro in C4D is a purely GPU technology; there is no CPU fallback, so GPU performance is the main driver of Pyro performance. It is also very RAM hungry, so having only 8 GB of VRAM, or sharing the system RAM of a 16 GB M1, will limit the scale of the effects you can achieve. The main way to increase performance (without upgrading your machine/GPU) is to simplify the sim by increasing the voxel size. This can be done in two places: the Pyro object/simulation settings, and the Pyro tag. The Pyro object/sim setting is the more impactful one, since it sets the overall memory footprint. You can also try reducing the padding, which will reduce the number of voxels in the simulation (though if you have explosive effects or fast-moving objects, they may "clip" outside the sim at points). Pyro runs very well, with good framerates, on M1-based Macs that have more GPU cores (e.g. M1 Max and M1 Ultra). A rough sketch of why voxel size matters so much follows below.
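     Since voxel count grows with the cube of resolution, doubling the voxel size cuts both voxel count and memory roughly 8x. A minimal sketch (the bytes-per-voxel figure is a made-up placeholder, not a real Pyro number; real channel counts vary per sim):

     ```python
     # Rough voxel-count/memory model for a cubic sim domain.
     # BYTES_PER_VOXEL is a hypothetical placeholder, not a Pyro figure.

     BYTES_PER_VOXEL = 48

     def footprint(domain_cm, voxel_cm):
         per_axis = domain_cm / voxel_cm
         count = per_axis ** 3                          # scales with the cube
         return count, count * BYTES_PER_VOXEL / 2**30  # (voxels, GiB)

     for voxel in (1.0, 2.0, 4.0):
         n, gib = footprint(domain_cm=400, voxel_cm=voxel)
         print(f"voxel {voxel} cm: {n:,.0f} voxels, ~{gib:.2f} GiB")
     ```

     Going from 1 cm to 2 cm voxels in this toy model drops the footprint from ~2.9 GiB to ~0.36 GiB, which is why it's the first lever to pull on an 8 GB card.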
  8. Hi Steve, this is fantastic, and thank you so much for working on this and making it available to the community! The existing functionality is great, but I'm wondering if you are thinking of extending its capability toward biological molecules? I realize that that is a big ask, but it would be an awesome help for the medical/bio visualization part of the C4D community, which is very active... Perhaps simpler changes might include:
     - adding a "spacefilling (CPK)" preset, which would probably entail:
       - adding a "van der Waals radius" setting for the Atom size mode
       - adding an option to remove hydrogen atoms (especially useful for larger biomolecules)
     If there was a spacefilling preset, it would be relatively easy to get a good surface representation with C4D's volume tools. The "dream list" for biomolecule visualization would include:
     - adding the ability to import standard biomolecule formats (.pdb, etc.)
     - adding further representations: protein backbone; ribbon diagram; solvent surface; etc.
     I know this is way too much to ask, but your plugin is exciting! Best, Nick
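     For what it's worth, a sketch of what the spacefilling preset could boil down to. The radii are standard Bondi van der Waals values; the function and its inputs are hypothetical, not the plugin's actual API:

     ```python
     # Hypothetical sketch of a "spacefilling (CPK)" preset: size each atom
     # sphere by its van der Waals radius and optionally drop hydrogens.
     # VDW_RADIUS holds standard Bondi radii in angstroms.

     VDW_RADIUS = {"H": 1.20, "C": 1.70, "N": 1.55, "O": 1.52, "P": 1.80, "S": 1.80}

     def cpk_spheres(atoms, skip_hydrogens=True):
         """atoms: iterable of (element, (x, y, z)); yields (radius, position)."""
         for element, pos in atoms:
             if skip_hydrogens and element == "H":
                 continue
             yield VDW_RADIUS.get(element, 1.50), pos  # fallback radius is a guess
     ```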
  9. Prior to RS version 3.5, the "uber material" in RS was the "RS Material". That is still the one that is created by default with the plus button in R26.014 (and RS 3.5). With RS 3.5, the new "Standard" material was introduced, which has new features and is much simpler to set up and use. It is based on the Arnold/Autodesk Standard Surface shader. The new "Standard" material is a vast improvement (to me anyway, and anyone familiar with the same thing in Arnold and some other engines that are adopting it). But since it is so new, Maxon/RS haven't yet made it the default. Some bugs are still being uncovered (e.g. this thread on the RS forums: https://redshift.maxon.net/topic/42067/transmission-scatter/27).
  10. They haven't made the standard material the default yet, as it is so new, but that is expected to come at some point...
  11. I think they really should have their own forum, but for some reason it hasn't happened yet...
  12. Yes, the bendiness default is not good. To get properly flowing and folding fabric, feel free to push it up to 100.
  13. This is incredibly short-sighted. Implementing ZRemesher as a modifier that can be part of a procedural stack opens up many workflows. Just a couple of examples:
     - generating appropriately tessellated (yet still editable) procedural text templates for dynamics sims
     - creating multi-resolution meshes for cloth sims, which can be quickly adjusted to optimize simulation quality and results
     It can still be used in a non-procedural way if you want. And ZRemesher does not work well above poly counts of a million or so, so stacking it with a poly reduction modifier, etc., would be an incredibly flexible way to dial in an appropriate mesh without the PIA of the command-undo loop... A sketch of the kind of stack I mean follows below.
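     A minimal Python sketch of such a stack, for the Script Manager. The text, extrude, and polygon reduction IDs are standard; c4d.Oremesh is my assumption for the Remesh generator's symbol, so verify it against your version's SDK:

     ```python
     import c4d

     # Sketch of a procedural stack: editable text spline -> extrude ->
     # polygon reduction -> remesh. c4d.Oremesh is an assumed symbol ID.

     def build_stack(doc):
         text = c4d.BaseObject(c4d.Osplinetext)          # stays editable text
         extrude = c4d.BaseObject(c4d.Oextrude)
         reduce_gen = c4d.BaseObject(c4d.Opolyreduxgen)  # tame poly count first
         remesh = c4d.BaseObject(c4d.Oremesh)            # assumed symbol

         text.InsertUnder(extrude)
         extrude.InsertUnder(reduce_gen)
         reduce_gen.InsertUnder(remesh)
         doc.InsertObject(remesh)
         c4d.EventAdd()

     build_stack(c4d.documents.GetActiveDocument())
     ```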
  14. I actually agree that the default settings are not optimal. For instance, the bendiness parameter should probably be 10 by default, and can usefully be set to 100; this reduces the "shape memory" of the cloth surface. Also, like all XPBD-based solutions, the stiffness of the material increases with substeps/iterations, so there needs to be an understanding that as you increase substeps to deal with problematic intersections, you may need to increase stretchiness/bendiness to maintain the fabric feel (see the note below). It's perfectly possible to get nicely realistic cloth sims. As the system evolves I hope they will develop better tools for dealing with things like entrapment of cloth surfaces when colliding surfaces overlap...
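     For the curious, the mechanism behind that substep/stiffness coupling: XPBD (Macklin et al., 2016) scales each constraint's compliance by the squared timestep,

     ```latex
     % XPBD time-step-scaled compliance
     \tilde{\alpha} = \frac{\alpha}{\Delta t^{2}}
     ```

     which makes stiffness independent of iteration count only at full convergence. Practical solvers stop well short of convergence, so constraints behave softer than specified; adding substeps/iterations pushes the result back toward the fully stiff converged solution, which is why the fabric stiffens and you compensate with stretchiness/bendiness.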
  15. How about this? https://www.dropbox.com/s/6cgx9mmo0717058/Test_pillow.mp4?dl=0 It helps to actually understand the system before criticizing it...
  16. Here's a cloth test I made with the new engine... https://www.dropbox.com/s/7h6x4yuu5jf3l3g/folding rotation 5.mp4?dl=0 This is a simple test of affecting a cloth object with a couple of rotational forces. One thing I learned on the beta: to get realistic cloth effects, don't be afraid to push the bendiness very high. The default is very low and results in very stiff and plastic-looking fabric. Go to 10, 50, or even 100.
  17. If it is like past releases, it will install by default in a new R26 directory.
  18. You seem to be conflating two roles: quality assurance and beta testing. Quality assurance roles are paid positions within the company, responsible for finding bugs and regressions, validating functionality, and running through test suites. Beta testing is unpaid, and involves (hopefully representative) professional users who agree to donate time using the software in more real-world settings, reporting bugs, and communicating their interests and priorities.
  19. No one would use their fingers with Forger (or at least they wouldn't use it successfully). Forger (and Nomad Sculpt, the other major iPad sculpting app) work best with the Apple Pencil, so that you have pressure-sensitive sculpting. This is no different from using ZBrush with a Cintiq: a directly interactive sculpting process for (especially) organic models, e.g. character and creature designs. The disadvantage of using an iPad for this is that Forger and Nomad don't come with the full feature set of something like ZBrush. But there is also a huge advantage: comfortable, mobile, performant sculpting on a tablet that has a long battery life and doesn't get hot. The work can always be elaborated/refined on a computer later. Whether this makes it useful for you depends upon your needs.
  20. You are one of the few in the world who would characterize these SoCs as mediocre. Remember, these SoCs are in laptops, and designed to sip power. So far, the M1 Max is generally quartering the time required for GPU compute tasks in Octane and Redshift benchmarks (compared to the M1). This means that they scale well, and it puts them in range of a GTX 1080 Ti for GPU rendering. If the rumoured 64- and 128-core variants appear next year (in e.g. the Mac mini, iMac, or Mac Pro), this family of SoCs will meet or exceed the best coming from individual discrete GPUs. What's truly interesting are the early results showing their advantages in large scenes, because of the unified memory architecture, bandwidth, and system memory pool. Users have rendered the Disney Moana scene in Redshift in ~26 minutes, versus 46 minutes on a Quadro RTX 8000. If these results hold up, it will demonstrate the value of a different path forward for system design...
  21. Our academic program equipped a lab with those "trashcan" Mac Pros in 2014, and they were incredibly reliable performers. We have since equipped that lab with Threadripper PCs, which are great. But when COVID hit last year, we were able to loan out the "trashcans" to students, who were able to complete their animation projects largely at home and remote into the lab to take advantage of the Threadrippers for extra rendering potential. So, I don't hold to the hatred those machines get. They were clearly an evolutionary dead end, but they were well built and reliable. The notion that the M1 will be "tuned for Blender" is bizarre, frankly. The opposite is true: with this investment, Blender will be tuned for the M1, the same way that Octane and Redshift are now tuned for the M1. The groundwork that Apple laid with developer support for Redshift and OTOY will now pay off in the adaptation of Cycles to the Metal framework. You could actually make the case that the M1 is tuned for the workloads Apple likes to think are the province of its Pro customers; this is evidenced by the silicon support for ML and advanced video encode/decode (including ProRes).
  22. Truly odd, and somewhat heartening, to see Nvidia and Apple collaborating on something (the physics extensions to USD...).
  23. It's worth noting that this approach creates a single long spline. You can simulate multiple segments by adding a material with a multi-colour, step-type gradient in the V direction; a conceptual sketch of the idea follows below.
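     The idea in plain Python (conceptual only, not C4D API code; the names and colours are illustrative): quantize V into flat-colour bands so one continuous tube reads as many segments.

     ```python
     # Conceptual stepped gradient: map a point's V coordinate (0-1 along
     # the spline) to a flat colour band, faking separate segments.

     SEGMENT_COLOURS = [(0.9, 0.2, 0.2), (0.2, 0.7, 0.3), (0.2, 0.4, 0.9)]

     def segment_colour(v, n_segments=12):
         band = int(v * n_segments)        # hard steps, no interpolation
         return SEGMENT_COLOURS[band % len(SEGMENT_COLOURS)]
     ```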
  24. I've adapted Mododo's brilliant solution and modified it to properly fill a volume and to get some more randomness. Mododo's key insight is setting the matrix to reference the seed spline on the first frame, and then switching it to reference the tracer on the second frame. Since the tracer references the matrix, this sets up a feedback loop that extends the length of the tracer-generated spline each frame. You would think that this wouldn't work (since it has two objects referring to each other for their basic structure: a paradox!), but it takes advantage of C4D's execution order and effectively ratchets up the line complexity over time. Crucial settings are:
     - the push apart effector's distance setting
     - the falloff in the push apart effector (setting it to volume lets you fill a volume based on a proxy object)
     - the random effector's intensity and scale settings
     - the Matrix object's step setting
     You can play with these settings to vary the look, but beware that some of them (e.g. lowering the step setting too much) will quickly bog C4D down with masses of generated geometry. A conceptual sketch of the feedback loop follows below. Volume filling spline tube (chromatin).c4d
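     A conceptual sketch of the frame-by-frame ratchet (plain Python, not C4D API; the effectors are reduced to stand-in functions):

     ```python
     # Each frame: the Matrix samples the tracer's current spline, the
     # effectors displace those samples, and the Tracer appends a pass
     # through them, so the spline grows every frame.

     import random

     def push_apart(p, distance=2.0):
         # crude stand-in: jitter by up to `distance` to separate points
         return tuple(c + random.uniform(-distance, distance) for c in p)

     def step_frame(spline_points, matrix_step=4):
         samples = spline_points[::matrix_step]   # Matrix samples the spline
         displaced = [push_apart(p) for p in samples]
         return spline_points + displaced         # Tracer extends the spline

     points = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (3.0, 0.0, 0.0)]
     for frame in range(5):
         points = step_frame(points)
     print(f"spline grew to {len(points)} points over 5 frames")
     ```

     Lowering matrix_step samples more points per frame, which is exactly why a too-low step setting in the real scene buries C4D in geometry.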