Leaderboard
Popular Content
Showing content with the highest reputation since 04/03/2025 in Posts
-
There are some very interesting new features. Particles have an extensive list of new functionalities, from which the following stand out.

The Field-driven density distribution of particles is highly appreciated, along with the in-built noise shaders for both distribution and emission. Density control through shaders and Fields is something I've been asking for in MoGraph for years. Finally it can be achieved through particles.

Noise Sampling for Color and Data Mapper
https://help.maxon.net/c4d/en-us/Content/Resources/Images/2025-2_Particles_NoiseSampling_01.mp4
https://help.maxon.net/c4d/en-us/Content/Resources/Images/2025-2_Particles_NoiseSampling_02.mp4

Neighbor Search algorithm for Flock, Predator Prey and Blending - similar to the geometry density coloring effect I've been asking for.

Some Scene Node Capsules are now directly available with the other primitives from the drop-down menus. This was something I was actively arguing about from the early versions of Scene Nodes, as it was a closely UX-related issue. Things were pointing to parallel ecosystems of the same tools being developed under the same application, or tools that were not that easily accessible, which would inevitably lead to confusion or frustration among new and old users. It wasn't too long ago that you could finally make capsules accessible from custom palettes, but that required users to know how to do that, and as Scene Nodes was so actively characterized as experimental or as a system for advanced users, people would prefer to stay away from it, missing some key features that were not that hard to use after all. Personally I extend my custom layout with capsules supported by the OM every new version.

Now the Line Spline is available along with the Break Spline, Branch Spline (which is an easier MoSpline Turtle), Catenary Spline, Dash Spline, Electric Spline, Partition and Pulse Spline. What I don't know is whether they re-developed those tools in C++ or whether they are just links to the node implementation with modernized icons to fit the rest of the UI.

Well, there are still some arguments about what should be characterized as a Generator, what as a Modifier and what as a Deformer... For example, the Branch Spline is better suited to the Generators club, as it does create additional splines on top of the original, but I guess they ran out of ideas on how to represent it with a new icon to avoid confusion with the MoSpline... Which leads to other old arrangement arguments, like why don't we just have new features as modes of older tools... For example the Branch Spline as a mode of the MoSpline... Break, Dash, Partition and Pulse as modes of a single spline deformer/modifier, or Wrap, Shrink Wrap and Spherify as modes of a single deformer... Looking forward to seeing all distribution nodes as modes in the Cloner, and Blue Noise as a mode of the Push Apart Effector.

New mode for the Look at Camera expression: I always thought this to be an old remnant from earlier C4D versions... I never used it because I achieved the exact same effect using the Target tag... I still don't know what the difference between them is...

Some unfortunate translation issues: weird title, not available in English (fixed the next day)
https://help.maxon.net/c4d/en-us/Default.htm#html/OFPBLEND-FP_BLEND_OBJECT_OUTPUT.html

MAXON ONE capsules are not documented... Bad marketing, as other users don't know what they are missing.

Things that we saw in the teaser video but that are not documented (yet): Constellation Generator (Plexus effect), Liquid Simulator (yeah... the thing that should be teased the most was not)
4 points
-
I think you may have this backwards. In plain language:

Straight alpha: stores the full RGB colour for each pixel, but ignores how transparent it may or may not be, i.e. the transparency of a pixel has no impact on the colour stored. This means, for example, if you render a white cloud with a soft wispy edge in a blue sky, the rendered cloud will only contain white cloud colours; the blue of the sky will not be present in the rendered cloud, even where the alpha transparency eats into the cloud.

Premultiplied: this simply means the image being rendered has already had the background baked into the rendered colour data. In the cloud example it means that it will start to turn blue as the edge of the cloud becomes more transparent.

In practical terms, straight alphas can be great because there's no bleeding of the background into the visual RGB data: you can take your white cloud and throw it onto any background you like, and there won't be any blue from the sky creeping in. On the other hand, if you place your premultiplied cloud onto an orange sunset background, you'll get a blue halo around the cloud, which sucks.

However... it isn't all roses. Sometimes you need the background colour to be baked into the transparent edge, because some things are just flat out impossible to render due to the number of layers present or the motion on screen. Here's one which screws me over regularly: what happens if I have a 100% transparent plastic fan blade, but the fan blade is frosted, and in the middle of the fan is a bright red light? Visually the fan blade has the effect of looking like a swooshing Darth Vader lightsabre. It's bright red and burning white from the brightness, but what's there? A 100% transparent object... The alpha channel with a straight alpha will obliterate my rendering - it's 100% transparent plastic. You can see it, but the alpha channel decides to ruin your day and now the rendering is useless. The only option here is a premultiplied alpha where the background is baked into the motion blur and SSS of the plastic fan blade. Sure, I need to make sure my 3D background somewhat matches my intended compositing background, but it's the only way to get any sort of useful render. The same goes for motion blur, DOF blur, and multiple transparent layers in front of each other (steam behind glass).

The honest answer is: use whichever one is least likely to screw you over. If you have lots of annoying transparent/blurry things to deal with, go premultiplied but plan your background ahead of time. If you want clean alphas on a simpler object render, go straight alpha.

I haven't read your linked blog all the way through, but I will say... there is an abundance of wrong people loudly proclaiming themselves to be fonts of all knowledge. There's one in the Octane community who insists on inserting himself into literally every thread on the entire Octane forum to tell you you're an idiot for using a PNG file; he has hundreds of blog pages which are a strange mix between 3D rendering and flat-earth magical woo woo maths to show everyone just how right he is.

That said, your rainbow example does match up with what the blog says. The only difference is the blog seems to think the straight alpha is evil and you should only use the premultiplied, whilst I would say both have their uses, with straight being preferable when possible.
3 points
-
Hey Jeff - Maxon is recording the NAB presentations and plans to post them 1-2 weeks after the show.
2 points
-
There are, if we look carefully, a number of things subtly wrong with the settings in this scene. As HP predicts above, Render Perfect on the sphere is one of them, and turning that off fixes the effect of the collision deformer not showing in the render.

Next we had some suspect settings in the sweep object (Parallel Movement should be off, Banking should be ON, and End Rotation should be 0 degrees), which fixes the twist in your path spline over the indent, and restores it to correct, contiguous circular form all the way along its length...

The mode of the Collision Deformer should be Outside (Volume), and the capsule should ideally not be a child of it, though it doesn't particularly matter in this scene. Likewise, object order in the OM could be better, bearing in mind that Cinema scans downwards through it from the top. Below is more ideal I would say, and general good practice.

Next, the curvature in your path spline within the sweep isn't quite matching the curve of the indent in the sphere, resulting in some unevenness in that area. This can be fixed by getting a new path spline for the sweep, which we get by doing Current State to Object on the main sphere (once collided) and then Edge to Spline on the centre edge loop of that. But first we have to fix the collision object, which should also ideally be a sphere, so that we can choose the Hexa type and thereby avoid the complex pole on the end of the capsule it replaces, which was previously confusing the collision deformer and producing some wanky / uneven collision vertices in the main sphere at the apex of the collision point, which would obviously affect any spline you subsequently derived from it.

In my version I wanted a better, higher resolution sphere, but not one that caused the collision deformer any extra work, so I changed the type of that sphere to Hexa as well, which then necessitated the addition of a Spherify deformer before the Collision Deformer (because hexaspheres are not mathematically spherical out of the gate). And then lastly that went under SDS to give us improved resolution, and my Current State to Object was performed on the SDS instead of the sphere itself, for maximum spline matching.

Anyway, I have fixed all that in the scene attached below...
groove-on-sphere CBR Fix.c4d
CBR
2 points
-
Lumion 2025.0 Lumion 2025.0 adds a new AI-based image upscaling system, aimed at users rendering on lower-end machines. Activating AI Upscale causes Lumion to render at half the target resolution, then use AI techniques – which are performed locally, on the CPU – to upscale the half-size image. As well as reducing total processing time, the workflow is aimed at users with less powerful GPUs, making it possible to “output an 8K ray traced image, even if your hardware does not support it”. The feature is currently still in beta, and may result in loss of detail in fine textures, and does not currently upscale other rendered output, like depth, specular and material ID maps. The hybrid ray tracing system introduced in Lumion 2023.0 has been extended to support water and fog. The Water Material now supports ray tracing, with the water surface now appearing in reflections for all of the ray traced materials in a scene, and vice versa. The update also introduces support for ray traced volumetrics, for effects like environment fog. As well as standard light sources, ray traced fog can be illuminated by emissive materials, volumetric lights and volumetric sun light, and can interact with volumetric clouds. The feature is still in beta, and is described as resource-intensive, with some known limitations. Workflow improvements include a new Scene Inspector, for searching for objects, layers and hierarchies in a project, making it easier to manage complex scenes. For performance troubleshooting, the title bar gets a color-coded ‘speedometer’ showing the current fps of the project, with more info displayed when hovering over the icon. Lumion’s accompanying asset library gets 200 new objects, with 68 new nature assets, including seven high-detail photogrammetric trees; plus 14 new materials. The update is compatibility-breaking, so projects saved in Lumion 2025.0 cannot be opened in older versions. Lumion has also changed the pricing and licensing for the software. The old $749/year Standard subscription, which provided access to Lumion’s core features and around half the total library assets, has been discontinued. The price of the Pro subscription has been reduced to $1,149/year, down $350/year, although the old floating licenses have been replaced by named-user licenses. However, there is also a new $1,499/year Studio subscription – the old price for Pro subscriptions – that includes a floating license of Lumion Pro, plus a named-user license of Lumion View, Lumion’s new SketchUp plugin. https://lumion.com/news/lumion-2025-release https://support.lumion.com/hc/en-us/articles/19336065135516-Lumion-2025-0-Release-Notes Kiri Engine 3.14 The 3.14 update extends those Lidar capabilities, introducing an extra processing step to refine the raw scan data, which is usually much lower-quality than that captured with a professional laser scanner. It makes use of two new machine learning models: StableNormal, for estimating surface normals from source images; and Prompt Depth Anything for depth estimation. The extra step reduces noise in the scan data, and improves resolution of small details: you can see before-and-after comparisons in the video at the top of the story. AI processing is primarily intended for use with Kiri Engine’s Scene Scan mode, for capturing entire environments – for example, for quickly scanning a set during a live shoot as later reference for VFX work. 
Although Lidar scanning, including Scene Scan, is available with free Basic accounts, the cloud processing requires a paid Pro subscription. Kiri Engine is available for Android 7.0+ and iOS/iPadOS 16.1+. The app itself is free to download. Free Basic accounts include photogrammetry, Lidar scanning, and Object Capture, and can export three 3D scans from the cloud per week. Pro accounts cost $17.99/month or $79.99/year, and make it possible to export an unlimited number of scans, and unlock advanced features, including the 3DGS toolset. https://rdk2hgu4np.feishu.cn/docx/UUh5d31PhoNVScx9mJfcZpl5nuf Marvelous Designer 2025.0 Widely used by games and animation studios, Marvelous Designer lets artists design 3D garments in the same way as real-world clothes, by stitching virtual 2D pattern parts. Users can import an animated character model, drape clothing over it, then export the result back to a 3D application in Alembic, FBX or USD format, as an OBJ file sequence, or as a Maya cache, PC2 or MDD cache. Key changes in Marvelous Designer 2025.0 include the new keyframe animation system. The software already had basic keyframe animation capabilities, but they were primarily intended for editing simulation caches: to fix artifacts, or to blend or retime simulations. The 2025.0 update extends those capabilities to keyframing the position of a 3D avatar's joints, the properties of the fabric or the simulation, and the properties of wind animations. Other animation-related changes include IK joint mapping for imported 3D avatars. Characters whose joint naming conventions match Marvelous Designer – including those from Daz 3D, Adobe's Mixamo library, Reallusion's Character Creator, and Unreal Engine MetaHumans – can be converted automatically. Other characters can be converted manually, by mapping joints in the user interface. A separate Auto Convert to Motion feature automatically converts imported FBX files to Marvelous Designer's MTN format. The update also introduces experimental support for fur. The Fur Strand material is supported in the viewport and test renders, and the fur moves automatically with the underlying fabric. Users can adjust the shape, length, thickness, curl, color and variation of the fur strands; and can export the strands to other DCC applications and game engines in USD format. There are also a lot of updates to existing features, including Auto Sewing, Auto Fitting, the Sculpt tool, UV Editor, and Avatar Editor. Highlights include the option to automatically generate UVs for the sides and back of a garment, not just the front, and better auto-conversion of garments to all-quad meshes. It is also now possible to recolor the base color map when exporting PBR textures, to create two-way zippers in garments, and to export data in USDZ format for use in AR apps. Workflow improvements include a new modular library, making it possible to save parts of garments as modular building blocks and quickly create style variations. The new library window is integrated with the CLO-SET online collaboration platform and its Connect asset marketplace. The UI has also been updated to streamline workflow and support the new features. Marvelous Designer 2025.0 is available for Windows 10+ and macOS 12.0+. The software is rental-only. Personal subscriptions cost $39/month or $280/year. Enterprise subscriptions cost $199/month or $1,900/year for a node-locked license, and $2,000/year for floating licenses. 
https://www.marvelousdesigner.com/learn/newfeature?v=2025.0 Vantage 2.8 Key changes in Vantage 2.8 include support for light linking, via include and exclude lists. The change makes it possible to choose which lights affect which objects in a scene, making it easier to art direct renders, particularly the position of shadows and highlights. Vantage also now supports the Lens Effects from the V-Ray Frame Buffer (VFB), including bloom and glare, and mimicking the effects of dust and scratches on the lens. Vantage also now supports the V-Ray Switch Material, used to switch quickly between variant materials on an asset during look development. It is also now possible to use source videos as bitmap textures: for example, when rendering TV screens or animated billboards. Vantage supports the H.264, H.265, AV1, VP8 and VP9 codecs. The update also adds support for the parallax room shader used by third-party architectural visualization asset providers wParallax and Evermotion. Performance improvements include support for Shader Execution Reordering (SER) providing “performance gains of up to 80%” on current NVIDIA Blackwell and Ada generation GPUs. Vantage also now supports new AMD integrated GPUs with 16 or more compute units. Chaos Vantage is compatible with Windows 10+ and DXR-compatible AMD, Intel or NVIDIA GPUs. The software is rental-only, with subscriptions costing $108.90/month or $658.80/year. https://www.chaos.com/blog/vantage-2-update-8-is-now-available https://docs.chaos.com/display/LAV/2.8.0 Uniform 1.4 It’s a major update, adding a complete new SDF modeling system. Also used in Adobe’s Substance 3D Modeler, Signed Distance Fields make it possible to quickly create 3D forms by performing Boolean operations on volume objects. In Uniform, users work on the new SolidVolume object, which can later be converted to a standard GeoMesh for final editing, and to export the 3D model. Other key changes in Uniform 1.4 include new features for users aiming to use the app to create models for 3D printing. They include the option to set the world scale, measure tools for checking real-world dimensions of models, and exporters for the STL and 3MF file formats. There are also new tools for UV unwrapping and packing models manually. You can see the workflow in the video attached to this post on X. Workflow improvements include a new customizable input system with support for Apple Pencil Pro gestures, and the option to export directly to iCloud. The UI rendering code has been rewritten, and now supports light and dark UI themes. Performance improvements include GPU batching of brush strokes, improving drawing performance, particularly on narrow strokes and symmetry passes. Uniform is compatible with iPadOS 18.0+. It is designed for use with the Apple Pencil, so some features may not be accessible through touch gestures alone. It costs $49.99. https://apps.apple.com/us/app/uniform-3d-editor/id6472727759 Photoshop 26.6 Photoshop 26.6 updates the software’s AI-powered Object Selection tool to make it easier to select and edit people within images. When hovering over a person, Photoshop now automatically identifies individual details that can be selected, as shown in the image at the top of this story. They can include facial features like the eyes, mouth and teeth; facial skin; hair – with facial hair and eyebrows coming up as separate details; and individual items of clothing. 
Once selected, the details can be edited independently of the surrounding image: for example, to quickly change the color of someone's eyes or clothes while retouching. The functionality can also be accessed via a new Select People button in the options bar. Generate Image, the Firefly-powered generative AI feature rolled out in Photoshop 25.11 last year, has also been updated. As well as typing in a text prompt to guide the image that Photoshop generates, users can now upload an existing image to act as a composition reference. Photoshop then generates a new image in the style of the text prompt, but with a composition – or in the case of images of people, a pose – matching the reference image. Other new features include Adjust Colors, for making quick color adjustments to images. Accessible from the contextual task bar, it launches on-canvas controls for adjusting the Hue, Saturation and Lightness of colours within an image: either the six Prominent Colors that Photoshop has identified automatically, or the color selected with the eyedropper tool. The same controls in the Properties panel for Hue/Saturation adjustment layers have also been redesigned, with larger sliders and swatches. In addition, it is now possible to do the processing for the Select Subject and Remove Background tools in the cloud, for "faster, more precise" results. Outside the stable release, the beta build of Photoshop features a reimagined Actions panel. As well as letting you record your own macros, it now suggests five multi-step edits that can be applied to the image or previewed on the canvas. It is also now possible to search for actions – both the readymade ones bundled with Photoshop and your own custom actions – using natural-language text prompts. Photoshop 26.6 is compatible with Windows 10+ and macOS 12.0+. In the online documentation, it is also referred to as the April 2025 release or Photoshop 2025.5. The software is rental-only, with Photography subscription plans that include access to Photoshop costing $239.88/year. Single-app Photoshop subscriptions cost $34.49/month or $263.88/year. https://helpx.adobe.com/photoshop/using/whats-new/2025-5.html Corona 12 Solo subscribers now get access to Chaos Scans as part of Cosmos, for use in rendering, but only Premium subscribers can edit the scans. The update also includes 18 new LUTs created by visualization artist Iraban Dutta. Workflow changes common to both host applications include a per-camera Global Volume override in the Corona Camera, for more control over volumetric effects like fog and haze. The experimental DOF Highlights Solver, which increases the speed at which depth of field effects render, at the expense of slower rendering of other effects, now gives correct results when highlights are seen in reflections or some refractions. 3ds Max users get faster timeline scrubbing when interactive rendering is enabled, and an update to the live link to Vantage, Chaos's real-time renderer. Cinema 4D users get support for more objects and material types in the Scene Converter, which automatically converts scenes originally created for the V-Ray renderer. That includes common objects like V-Ray Proxy and V-Ray Clipper; the V-Ray Triplanar, V-Ray Two-Sided, and V-Ray Blend materials; and the V-Ray Decal and V-Ray Enmesh systems. There is also a new minimap in the Node Material editor, helping to navigate complex materials. 
Outside the core application, functionality for creating immersive virtual tours has been added to Chaos Cloud, Chaos's cloud rendering platform. Corona is also compatible with Anima 6, the latest version of Chaos's crowd animation system for 3ds Max and Cinema 4D, which added a new traffic simulation system. Corona 12 Update 2 is compatible with 3ds Max 2016+ and Cinema 4D R17+. The software is available subscription-only. Corona Solo subscriptions are node-locked, and include access to the Chaos Cosmos asset library; Corona Premium subscriptions are floating, and also include Phoenix, Chaos Player and Chaos Scans. Corona Solo subscriptions cost $59.90/month or $394.80/year. Corona Premium subscriptions cost $72.90/month or $514.80/year. Additional Corona Render nodes cost $172. https://www.chaos.com/blog/corona-12-update-2 https://docs.chaos.com/category/corona NVIDIA open-sources PhysX's GPU code NVIDIA has fully open-sourced the SDK for PhysX, its real-time physics system, and Flow, its gaseous fluid simulation system. Whereas previous releases came with compiled binaries for GPU acceleration, the releases of PhysX 5.6 and Flow 2.2 include full GPU source code. NVIDIA originally partly open-sourced PhysX in 2018, adding gaseous fluid simulation library Flow in 2022. However, in previous releases, only the CPU-side code was fully open-source: GPU support was provided via pre-compiled binaries. The latest releases – the PhysX 5.6 SDK and Flow 2.2 – include the GPU source code, making both technologies fully open-source. That means that it would be possible for developers integrating PhysX into their tools to support AMD or Intel hardware for GPU acceleration, although it would be a lot of work to do so fully. NVIDIA's blog post notes that PhysX contains over 500 kernels written for CUDA, its GPU compute framework. The source code for PhysX SDK 5.6 is available on GitHub under a 3-clause BSD licence. It can be compiled to run on Windows 10+ or Linux, and is tested on Ubuntu 20.04+. You can find build instructions for Windows and Linux on GitHub. The source for Flow 2.2 is provided in the same repository, also under a 3-clause BSD licence. https://github.com/NVIDIA-Omniverse/PhysX/discussions/384 https://github.com/NVIDIA-Omniverse/PhysX LightWave 2025 LightWave 2025 is the third major update to the software since late 2023. It follows a three-year hiatus during which development was suspended by previous owner Vizrt, which acquired NewTek, LightWave's long-time developer, in 2019. The software's current owner, LightWave Digital, is a start-up whose management team comprises people who were closely involved with the software in its NewTek days, including former NewTek staff, plus key add-on developers and LightWave users. Major changes in LightWave 2025 include RiPR (Real-time Path Rendering). The GPU-accelerated path tracing system is available for viewport previews as an alternative to the standard VPR (Viewport Preview Renderer) or GL previews. RiPR is intended to provide more visually realistic previews for look development or visualization work, supporting HDR lighting, depth of field, and better handling of transparent materials. It's built on NVIDIA's OptiX ray tracing framework, so it requires an NVIDIA GPU – a GeForce 10 Series or Quadro Pascal card or newer – and is currently limited to a single viewport. New 3D modeling tools include SuperPatcher, for capping holes in quad meshes. 
The Displacement Brush makes it possible to paint surface details like wrinkles or cracks, at least onto polygonal geometry: it doesn't work with SubPatch models. It is also now possible to edit the normals of meshes on a per-model basis via the new SuperNormals system. Suggested use cases include fixing model-specific shading artefacts, creating custom edge styles, or exporting game assets that require specific shading in different game engines. LightWave 2025 also includes Construct, a new procedural tool for generating structures like "stairs, decks [and] bridges" for architectural visualization or set design work. Its Stair Calculator can be used in both Modeler, LightWave's modeling application, and Layout, its scene layout application. RHiggit!, the modular character rigging system integrated into LightWave in LightWave 2024, gets three new tools: Steppit!, Handdit! and Pickkit!. Steppit! is an automated walk cycle generator. It works for both bipeds and creatures, and its output can be combined with standard keyframe animation, to modify the motion cycles generated, or to add secondary animation. Handdit! is a dedicated hand- and finger-animation system. It provides controls for posing hands, either globally or finger-by-finger, and can be accessed within its own tab, or in the RHiggit! and Steppit! menus. Pickkit! is a rig picking interface. It provides quick access to commonly adjusted rig points like shoulders and knees, and is "fully compatible" with the Steppit! and Handdit! interfaces. The LightWave integration for OctaneRender, included with LightWave since LightWave 2023, has also been updated. Changes include three new gradient nodes, and workflow improvements including updates to the Render Layers panel. Another headline feature in LightWave 2025 is the new Toon Filter, a "comprehensive post-shading tool" for viewport previews and final renders. It generates outlines around objects, or polygons within an object, and can be combined with the existing Cel Shader to create cel-animation-style looks. Users can control the thickness of the outlines, how they are layered, and how they scale with depth in the scene; and can shade and animate individual outlines independently. As with the previous LightWave Digital releases, LightWave 2025 integrates legacy third-party plugins from the NewTek era: this time, from developer Denis Pontonnier. The integrated DP Tools comprise four of Pontonnier's add-ons, including DP Verdure, for creating polygonal trees, foliage and grass. The Rman collection is a set of shaders and textures ported from Pixar's RenderMan renderer, DP Filter provides post-processing effects, and DP Kit is a set of nodes for the node editor. Other changes include support for Python 3 for scripting. Python 2 is still supported, although it has been deprecated since 2020, with most other CG applications moving to Python 3 several years ago. The update also adds a new glTF converter, used for converting assets on the fly for display using RiPR, but which also makes it possible to export models and animations in glTF format. LightWave 2025.0 is compatible with Windows 10+ and macOS 10.15+ (macOS 11.0+ for Apple Silicon Macs, and macOS 13.3+ to use the Octane renderer). New licenses cost £795 (around $1,055). 
https://lightwave3d.com/information-pages/new-feature/ https://docs.lightwave3d.com/2025/2025-change-log.html Shapelab 2025 To that, the latest update – Shapelab 2025 came out last year, so it’s officially Shapelab 2025 v2.0 or just the ‘Spring Update’ – adds support for voxel-based sculpting. The technique, also supported in applications like 3DCoat, represents models as a solid grid of 3D pixels, rather than as polygonal surfaces. It makes it possible to model without topological constraints, removing the need to subdivide a mesh to add detail, and making it easier to punch holes through objects. However, it makes it harder to perform other operations, so each approach has its own merits. Leopoly pitches Shapelab’s new voxel engine as a secondary toolset, intended primarily for quick sketching and form exploration, and for blocking out rough forms for sculpts. The company describes the release as “laying the groundwork for hybrid workflows that combine the structural precision of polygonal meshes with the fluid sculptability of voxels”. Users can either sculpt voxel objects from scratch, or convert an existing polygonal mesh to a voxel object for editing. In the initial release, the voxel sculpting brushes include Voxel Clay, for adding or removing volume from a voxel object; Inflate/Deflate Voxel, which can be used to draw strokes; and Smooth Voxel, for softening geometry. It is also possible to use voxel shapes as 3D stamps. However, the toolset is still officially a work in progress, with known limitations including a lack of support for masks or Boolean operations. Updates to existing features include support for cavity masking, which masks a surface according to its local curvature, making it possible to target only cavities or bumps. In addition, “most” of the sculpting brushes now include custom falloff properties, enabling users to control the area of a sculpt they influence by adjusting the falloff curve. It is also now possible to use custom 3D shapes with the Stamp Tool. Workflow improvements include a new ‘Quick Switch’ shortcut, making it possible to select any object or layer and jump directly into Edit Mode with a single thumbstick gesture. There is also a new quick cursor repositioning system when sculpting, and the Quick Access Wheel pie menu now supports a wider range of actions. Shapelab 2025 is compatible with Windows 10+. Find a list of supported VR headsets here. Perpetual licenses have a standard price of $64.99. Subscriptions now cost $29.99/year, down $30/year since the previous release. https://leopoly.atlassian.net/servicedesk/customer/portal/3/article/827260936 https://shapelabvr.com/ Anima 6.0 Anima 6.0 is the first update to the software in over two years, and it’s a pretty big one, adding Vroom: a complete new traffic simulation system. As with the crowd simulation tools, it’s intended to provide an intuitive way to achieve visually plausible results, without the complexity of more detailed simulations. According to Chaos, it can simulate “hundreds” of animated vehicles to populate city environments in visualizations, but has “limited scope” for traffic engineering studies. Vroom uses a straightforward step-by-step workflow to simulate traffic. The Road tool lets you draw road networks in a 3D scene, then adjust their width and the number of lanes, the direction of traffic flow, and traffic rules. 
Vehicles can then be placed manually on the roads, or generated automatically, with options to control their density, the frequency of individual vehicle types, and driver behavior. Vroom then simulates the motion of the vehicles, including wheel movement and steering behavior, with built-in physics for vehicle inertia and response to bumps in the road. There are also some major changes under the hood in Anima 6.0, including an overhauled user interface, and a new 3D viewport engine. It is described as a “modern multi-target renderer” with support for DirectX 11, DirectX 12 and Vulkan – the OpenGL subsystem has been removed – improving performance and memory use. The library of stock 3D assets available with Anima subscriptions has also been expanded, increasing the cultural diversity of the character models, and adding over 40 animated vehicles. Anima 6.0 is not available for Maya. Anima 6.0 is available for 64-bit Windows 7+. The integration plugins are available for 3ds Max 2023+, Cinema 4D R26+ and Unreal Engine 5.3+. The software is now available rental-only, Chaos having discontinued perpetual licenses last year. Subscriptions cost $121.80/month or $717.60/year. https://docs.chaos.com/display/ANIMA/What's+New+in+Anima+6 https://docs.chaos.com/display/ANIMA/6.0.0 ZibraVDB A recent wave of restructurings across major VFX studios — including Technicolor, MPC, The Mill, and Jellyfish Pictures — has sent shockwaves through the industry, highlighting the growing need for more agile, modern production pipelines. Studios face shrinking budgets, tighter timelines, and an ever-growing demand for visual effects that can be rendered or tweaked almost instantly. Legacy OpenVDB workflows, with their heavy storage needs and offline-only approach, simply aren’t designed for real-time iteration, limiting their role in evolving production models like virtual production. These problems become painfully clear on virtual production sets, where teams require high-fidelity effects in real time rather than relying on slow, resource-intensive CGI workflows. ZibraVDB offers a future-aware way to render complex volumetric effects in real time without eternal scene processing and enormous hardware requirements. It could be the next standard in OpenVDB workflows for virtual production, gaming, and beyond, dealing with emerging industry problems and establishing efficient new processes at studios around the world. Volumetric effects once relegated to offline processes can now run in real time, thanks to ZibraVDB. The technology has already been adopted by Dimension, ILCA, and other industry leaders, and SideFX supports ZibraVDB in its SideFX Labs add-on tools for Houdini. ZibraVDB doesn’t just modernize volumetric data handling: it redefines it for the era of immediate feedback and budget-conscious production. With near-instant preview and iteration, you can incorporate high-fidelity effects in-camera on set, rather than being tethered to an expensive post-production stage. As the demand for real-time workflows rises, studios need to rethink their pipelines to stay competitive and cost-efficient. ZibraVDB compression reduces the size of traditional OpenVDB files by up to 98%, drastically cutting down storage requirements and bandwidth overhead. The new free version of ZibraVDB now offers up to five compressions for free, making it possible to convert up to five VDB sequences into .zibravdb format. This means that smaller teams can test real-time volumetrics without up-front licensing fees. 
It’s ideal for those on the fence, or simply curious about bridging offline and real-time pipelines. The Houdini plugin is included in the standard subscription, ensuring that everyone can use ZibraVDB for advanced volumetric effects directly within the leading procedural software, and bringing an instant drag-and-drop workflow between Houdini and Unreal Engine. For larger companies needing an even more tailored approach, ZibraVDB Studio subscriptions offer enterprise-level flexibility. This includes custom feature development, offline licensing, high-touch support, and SDK integrations that adapt to a range of specialized workflows. Whether you’re dealing with substantial data sets or more complex real-time requirements, this tier ensures you have direct support, pipeline consulting, and flexible licensing to match enterprise-scale needs. https://www.zibra.ai/zibravdb-pricing?utm_source=article&utm_medium=cta&utm_campaign=cgchannel These advantages resonate powerfully in virtual production, where compressed data keeps GPU loads light and enables real-time rendering by removing bandwidth bottlenecks. Filmmakers and on-set artists can tweak atmospheric or pyrotechnic effects in seconds, letting them finalize complex shots without endless back-and-forth. CGI pipelines see parallel gains: large files move easily across networks, speeding up work on multiple render nodes. Because the data is compressed, shifting or archiving assets becomes simpler. This is especially crucial for remote teams or those relying on cloud servers. In gaming, developers can compress volumetric effects for real-time usage, enabling cinematic-quality clouds, fog, or fire to appear without ballooning build sizes. And it’s not just for cutscenes any more: ZibraVDB can also be used for live gameplay. ZibraVDB represents a bold move forward, offering seamless real-time volumetric rendering for virtual production, efficient OpenVDB compression for CGI, and optimized asset sizes for games, cementing its position as a new standard in OpenVDB workflows. With flexible pricing options and extensive opportunities for studios, it is the last missing piece in the Unreal Engine pipeline. https://www.zibra.ai/?utm_source=article&utm_medium=cta&utm_campaign=cgchannel Howler 2023 Originally released over 20 years ago, Howler – originally Project Dogwaffle – is an idiosyncratic, inexpensive digital painting and content creation tool. Its core strength is natural media painting, but it also features basic 3D rendering capabilities, primarily for landscapes and foliage, and animation features including a timeline, onion skinning, frame repair, retiming, and an exposure sheet for lip sync animation. Developer Dan Ritchie himself comes from a 3D background, having worked at pioneering broadcast VFX firm Foundation Imaging on Star Trek: Voyager. He later began developing plugins for LightWave, some of which were integrated into Howler. Ritchie comments that “throughout its life, Howler has been more than just software — it’s been a canvas for natural media painting, a robust toolset for animation with features like onion skinning and keyframing, and a powerhouse for image editing and performance optimization. “Many of these technologies were ahead of their time, reshaping the creative world.” Although Ritchie continues to release updates to Howler – the latest, Howler 2025.5, adds a new text crawl system for animated titles – development has recently been hampered by health problems and the recent theft of his main development laptop. 
“As I face health challenges and personal hardships, it has become clear that I can no longer sustain this journey on my own,” he wrote in a post on Patreon. “To preserve Howler’s legacy, I hope to make it freely accessible to the wider community. This is my goal — but it comes with exit costs.” To give new users a taste of what the software can do, Ritchie has just released Howler 2023 for free, but now aims to raise $3,000 to release the latest version as freeware. The money will be used to cover back taxes and web hosting for the freeware release. Anyone who wants to support the effort can back Ritchie on Patreon, or simply buy the software: at the time of writing, you can buy the latest version for just $12.99. https://www.patreon.com/posts/help-us-preserve-125460793 http://www.pdhowler.com/WhatsNew.htm Maverick Studio 2025.1 Developed by Arion renderer creator RandomControl and launched in 2019, Maverick Studio is a streamlined renderer aimed at product and automotive visualisation. It provides a drag-and-drop workflow for assigning PBR materials, and for setting up lighting and a render camera, plus a physically accurate spectral render engine with built-in denoising. The software is CUDA-based, and supports out-of-core rendering. It was later joined by Maverick Indie, a lower-priced edition aimed at entertainment work, which lacks support for CAD file formats, plus features like NURBS and cross-section rendering. Maverick Studio and Maverick Indie 2025.1 expand two of the key toolsets introduced in the 2024 releases: the new animation system and USD support. The animation toolset gets a new set of easing modes for keyframes, the option to clone keyframes by [Shift]-dragging in the timeline, and undo support for “most” timeline operations. It is also now possible to import deforming meshes as well as rigid body animation, making it possible to import and render rigged and skinned characters. USD import – described as still “very preliminary” in last year’s releases – has been improved, and is now the recommended way to import non-CAD data into the software. The update also revamps the Move tool, updating both the design of its viewport gizmo and its underlying mechanics, “squash[ing] some long-standing bugs”. Other changes include the option to change the UI theme, and localization of the interface text into 11 languages, including Chinese, French, German, Japanese, Korean, Russian, and Spanish. Maverick Indie and Maverick Studio are available for 64-bit Windows only. Both are CUDA-based, and require a RTX-compatible NVIDIA GPU. You can see a feature comparison table for the two editions here. Perpetual licences of Maverick Indie cost €395.99 (around $449), up €146 on the previous release; rental starts at €19.99/month ($23/month). Perpetual licences of Maverick Studio cost €795.99 ($902), up €296 on the previous release; rental starts at €39.99/month ($45/month). https://maverickrender.com/wp-content/builds/maverick_changelog.html DaVinci Resolve 20.0 For grading, the Color page‘s Color Warper gets a new Chroma Warp tool. It is designed to create looks intuitively, with users selecting a color in the viewer, and dragging to adjust its hue and saturation simultaneously. Among the existing tools, the Resolve FX Warper effect gets a new Curves Warp mode, which creates a custom polygon with spline points for finer control when warping images. 
Magic Mask, DaVinci Resolve’s AI-based feature for generating mattes, has been updated, and now operates in a single mode for both people and objects. Workflow is also now more precise, with users now placing points to make selections, then using the paint tools to include or exclude surrounding regions of the image. Another key AI-based feature, the Resolve FX Depth Map effect, which automatically generates depth mattes, has been updated to improve speed and accuracy. For color management across a pipeline, the software has been updated to ACES 2.0, and OpenColorIO is supported as Resolve FX. For compositing and effects work, the Fusion page gets support for deep compositing. Deep compositing, long supported in more VFX-focused apps like Nuke, makes use of depth data encoded in image formats like OpenEXR to control object visibility. It simplifies the process of generating and managing holdouts, and generates fewer visual artifacts, particularly when working with motion blur or environment fog. Deep images can now be viewed in the Fusion viewer or the 3D view, and there is a new set of nodes to merge, transform, resize, crop, recolor and generate holdouts. It is also possible to render deep images from the 3D environment, and export them as deep EXRs via the Fusion saver node. Other new features in the Fusion page include a new optical-flow-based vector warping toolset, for image patching and cleanup, and for effects like digital makeup. There is also a new 360° Dome Light for environment lighting, and support for 180 VR, with a number of key tools updated to support 180° workflows. Pipeline improvements include full multi-layer workflows, with all of Fusion’s nodes now able to access each layer within multi-layer EXR or PSD files. Fusion also now natively supports Cryptomatte ID matte data in EXR files. DaVinci Resolve Studio 20.0 also features a lot of new AI features powered by the software’s Neural Engine, although primarily in the video editing and audio production toolsets. The Cut and Edit pages get new AI tools for automatically creating edit timelines matching a user-provided script; generating animated subtitles; editing or extending music to match clip length; and matching tone, level and reverberance for dialogue. There are also new tools for recording new voiceovers during editing to match an edit. Workflow improvements include a dedicated curve view for keyframe editing; plus a new MultiText tool and updates to the Text+ tool for better control of the layout of on-screen text. For audio post work, the Fairlight page gets new AI features for removing silences from raw footage, and automatically balancing an audio mix. Other key changes include native support for ProRes encoding on Windows and Linux systems as well as macOS. MV-HEVC encoding is now supported on systems with NVIDIA GPUs, and H.265 4:2:2 encoding and decoding are GPU-accelerated on NVIDIA’s new Blackwell GPUs. Blackmagic Design also announced two upcoming features not present in the initial beta. Artists creating mixed reality content will get a new toolset for ingesting, editing and delivering immersive video for Apple’s Vision Pro headset. There will also be a new generative AI feature, Resolve FX AI Set Extender, available via Blackmagic Cloud. More details will be announced later this year, but Blackmagic says that it will enable users to generate new backgrounds for shots by entering simple text prompts. 
Blackmagic Design has just raised the price of the software in the US to $335 to account for the new US import tariffs. Prices in other countries are unchanged. https://www.blackmagicdesign.com/support/readme/bb72012583234d4bbea9f1ca54acbcb5 Cineware for Unreal First released in 2021, the Cineware for Unreal plugin streamlines the process of working with Cinema 4D content inside Unreal Engine. The add-on makes it possible to import a Cinema 4D file into Unreal Engine, edit its parameters, and have the scene update inside Cinema 4D and transfer the changes back to Unreal. You can find more details in this story on the original version of Cineware for Unreal. In the initial release, it was possible to transfer geometry, materials, lights and cameras, but support for animation was largely limited to simple object transforms and material changes. While it was possible to import more complex animations as geometry caches, the workflow did not support skeletal meshes. That changes with the latest update, which makes it possible to transfer rigged and animated characters via Unreal Engine’s new Interchange Framework. To do so, you need to be using Unreal Engine 5.5: while the plugin is also available for previous versions of Unreal, those versions use the older Datasmith API. However, Datasmith currently supports a greater range of Cinema 4D materials, lights and cameras, so it is still available as an option, even in Unreal Engine 5.5. You can see a detailed comparison table of the types of Cinema 4D content you can transfer using Datasmith and the Interchange Framework in the online release notes. Cineware for Unreal 2025.2 is compatible with Unreal Engine 5.3+ and Cinema 4D 2024.0+ on Windows and macOS. https://support.maxon.net/hc/en-us/articles/19332141082396-Cineware-for-Unreal-Engine-2025-2-April-2-2025?_gl=1*622r9r*_gcl_au*MTIxOTM1NTAyNS4xNzQzNjA5ODAw https://www.maxon.net/en/downloads/unreal
1 point
-
Not particles or pyro, but still fun - field and volume based cheese 🙂
167_OpenVDB_cheese(MG).c4d
1 point
-
Deformers are multiplicative when added on the same object. You need to add each new translation to the one before it. Use the Connect Object to freeze the object's form and then apply the new deformation. You can also use the regular Cube instead of the Extrude... I just kept going with it when trying to find the solution to this. Try this: now the Cube is skewed from all 3 directions, with no 90 degree angles. You can use a Connect Object instead of the Edge to Spline plugin. It just helps me deal with the absence of the appropriate capsule.
Rombocube.c4d
Question: What does the Rhombohedron have to do with special relativity? Or do you also teach other classes, like analytical geometry?
1 point
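To make the "multiplicative" point concrete, here's a tiny NumPy sketch (purely conceptual, not C4D code, and the shear values are arbitrary): the second deformation acts on points already moved by the first, which is not the same as adding each deformer's effect measured from the original, untouched geometry - which is exactly why freezing the form with a Connect Object between steps helps.

```python
# Conceptual sketch of stacked deformations: deformer 2 sees deformer 1's output.
import numpy as np

S_xy = np.array([[1.0, 0.5, 0.0],   # shear X along Y
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
S_yz = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.5],   # shear Y along Z
                 [0.0, 0.0, 1.0]])

p = np.array([1.0, 1.0, 1.0])        # one corner of a unit cube

stacked     = S_xy @ (S_yz @ p)                      # composed, as stacked deformers actually evaluate
independent = p + (S_xy @ p - p) + (S_yz @ p - p)    # naive "each deformer measured from the original"

print(stacked)      # [1.75 1.5  1.  ]
print(independent)  # [1.5  1.5  1.  ]  -> not the same point
```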
-
1 point
-
request from the scene nodes discord server... a quick and dirty method of creating random poly islands:
11-sn-random-poly-islands-BASIC.c4d
1 point
-
Actually that was much easier than I expected. You just have to relate the position of your boat to the position of your material tag. In your case you'll probably need more than one material tag: one with the water properties and one with only the displacement. You mix the two (Add Material option), and don't forget to disable Tile for the displacement material tag. It also has to be set to Flat Projection.
1 point
-
I think you can easily do what you want using a Shader Field. Plug that texture into it and have... Oh wait, you need the texture to move... Hmmm... maybe same as above, but use the Field to generate a vertex map... and then the vertex map as a Shader (Vertex Shader) to re-convert it to a texture. Don't forget to make the Field a child of the moving object... Unfortunately you still need a lot of polygons. I guess this can also work using XPresso if you relate the Offset U and Offset V of the material Tag to the position of the object, but converting from absolute values to percentages always gives me a headache.
1 point
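For what it's worth, the XPresso idea can also be sketched as a Python tag. This is only a rough sketch under assumptions: the tag sits on the water plane, the boat object is literally named "Boat", the displacement material is the last texture tag on the plane, and the flat projection covers a square plane of an assumed size - adjust the names, the axis signs and the size to the actual scene.

```python
import c4d

PLANE_SIZE = 2000.0  # assumed width/depth of the water plane in scene units

def main():
    plane = op.GetObject()                        # the Python tag sits on the water plane
    boat = op.GetDocument().SearchObject("Boat")  # hypothetical object name
    if boat is None:
        return

    # Assumption: the displacement-only material is the last texture tag on the plane.
    tex_tags = [t for t in plane.GetTags() if t.CheckType(c4d.Ttexture)]
    if not tex_tags:
        return
    disp_tag = tex_tags[-1]

    # Convert the boat's world position into a fractional (percentage) UV offset
    # of the flat projection, so the displacement pattern follows the boat.
    pos = boat.GetMg().off
    disp_tag[c4d.TEXTURETAG_OFFSETX] = -pos.x / PLANE_SIZE
    disp_tag[c4d.TEXTURETAG_OFFSETY] = pos.z / PLANE_SIZE
```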
-
Very pleased to hear that Rocket Lasso is back in action after a very long time away, starting with Season 7 Ep 1 tonight at 8 pm (BST).
CBR
1 point
-
So for the last couple of days I've been trying to get really deep into digital color science and all the baggage that comes with it. This is all in preparation for upcoming projects and the desire to understand this topic once and for all, at least the basics. So far everything has been working out, from Input Transforms over ACES to Color Spaces etc. This all changed when I got to the good old topic of Alpha (oh god help me please).

As far as I understand now, and from a seemingly very knowledgeable source, there are basically two types of color encoding with Alpha:
Premultiplied Alpha / Premultiplied Color / Associated Alpha
Straight Alpha / Unmultiplied Alpha / Unassociated Alpha

Before I start, we have to fundamentally clarify two things, important for terminology:
RGB describes the amount of color emitted. Not the brightness, or how strong the color is, just the amount of color that is "emitted" from your screen, for each primary color.
Alpha describes how much any given pixel occludes what is below it.
tl;dr: RGB = Emission, Alpha = Occlusion

Premultiplied Alpha... probably has the dumbest name ever, because intuitively you'd think something is multiplied here, right? Well, that's WRONG. The formula for blending with Premultiplied Alpha looks like this, where FG is Foreground and BG is Background:
FG.EMISSION + ((1.0 - FG.OCCLUSION) * BG.EMISSION)
What this comes down to is that premultiplied basically saves the brightness of each color independently from the Alpha, and the Alpha just describes how much of the background this pixel will then cover. This means that you can have a very bright pixel and its Alpha set to 0, so it will be invisible, but the information will STILL be there even though the pixel is completely transparent.

Blending works like this, where foreground is our "top" layer and background is our "bottom" layer that is being composited onto:
1. Check if the current pixel has some kind of occlusion (Alpha < 1) in the foreground.
2. Scale the background "brightness" or "emission" by one minus the occlusion value (BG Color * (1.0 - FG Alpha), pretty much).
3. Add the emission from the current pixel's foreground (BG Color from 2. + FG Color).

Straight Alpha... is considered to be a really dumb idea by industry veterans, and often not even called a real way to "encode color and Alpha". The formula looks like this:
(FG.EMISSION * FG.OCCLUSION) + ((1.0 - FG.OCCLUSION) * BG.EMISSION)
What this means is that Straight Alpha multiplies the pixel emission by the occlusion (Alpha), as opposed to having the final emission of the pixel saved independently from the Alpha. If you've ever opened a PNG in Photoshop, this is pretty much exactly what Straight Alpha is. There is no Alpha channel if you open a PNG in PS, just a "transparency" baked into your layer. All the pixels that are not 0% transparent are not their true color, as Premultiplied Alpha would describe it. I have not read this terminology anywhere, but personally I would kinda call this a "lossy" form of Alpha, since the true color values are lost and are not independent from the Alpha, unlike Premultiplied Alpha.

Why am I telling you all this? Fundamentally I just want to check if I understand this concept, because there is so much conflicting information on the internet it's not even funny. I am so deep in the rabbit hole right now that I question whether some software even uses the terminology correctly, and C4D is one of them. You know how C4D has this nice little "Straight Alpha" tick box in the render settings? Well, according to the manual it does this:

Am I completely crazy now, or is this not EXACTLY what I, and the blog post I linked above, describe as Premultiplied Alpha? Because we have RGB and Alpha as separate, independent components?

Another example: if you just search for "Straight Alpha" on the internet, you might find this image:

This is the same story as above. Doesn't the Straight Alpha example look exactly like Premultiplied Alpha, and the example for Premultiplied Alpha like what Straight Alpha really is?

I truly feel like I'm taking crazy pills here, and I hope someone more knowledgeable in the whole Color Science / Compositing field can tell me where the hell I am wrong. Did I misunderstand how these two concepts actually look in practice, did I miss some important detail, or is there just so much misinformation about this topic EVERYWHERE?

If you've made it to here, thank you for listening to my ramblings. I hope I can be enlightened, otherwise this is going to keep me occupied for forever...
1 point
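For anyone who wants to poke at the two formulas quoted in the post above, here is a minimal per-channel sketch in plain Python (all the values are made up). It also runs the "very bright pixel with Alpha set to 0" case described earlier:

```python
def over_premultiplied(fg_em, fg_occ, bg_em):
    # FG emission is added as stored; Alpha only scales down the background.
    return fg_em + (1.0 - fg_occ) * bg_em

def over_straight(fg_em, fg_occ, bg_em):
    # FG emission is scaled by its own occlusion at composite time.
    return (fg_em * fg_occ) + (1.0 - fg_occ) * bg_em

bg = 0.2  # some background emission

print(over_premultiplied(5.0, 0.0, bg))  # 5.2 -> the bright, fully transparent pixel still contributes
print(over_straight(5.0, 0.0, bg))       # 0.2 -> the stored emission is thrown away entirely
```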
-
@b_ewers
1. Change the Maxon Noise > Input > Source to UV/Vertex Attribute, so that the noise samples in 2D Texture Coordinate (UV) space, rather than 3D space.
2. Significantly reduce the overall scale from 100 to something like 1.2.
3. Adjust the relative scale to something like 1, 20, 1 to get the vertical streaking.
4. Increase contrast to 1.
5. Change the noise type to Turbulence.
uv-space-noise_v01.c4d
1 point
-
1 point
-
I was absolutely perplexed by something that seemed so simple at first. Later on, I acquired some mental trauma from tracking down a particularly nasty bug around alpha and PNGs. You'd probably not be surprised that, once one becomes more or less familiar with the nuances around alpha channel handling, nuanced bugs can crop up in even the most robust software. So... yes. 🤣

This is where I encourage folks to gain enough confidence in their own, hopefully well researched, understanding. Eventually, it enables folks to identify where the specific problem is emerging.

The skull demonstration has zero alpha regions within the "flame" portions. Following the One True Alpha formula, the operation should be adding the emission component to the unoccluded plate code values. There is no "scaling by proportion of occlusion" to the plate that is "under", as indicated by the alpha being zero. The author has some expanded commentary over on the issue tracker.

The following diagram loosely shows how the incoming energy, the sum of the green and pink arrows, yields a direct "additive" component in the green arrow, and the remaining energy indicated by the pink arrow, scaled by whatever is "removed" in that additive component, is then passed down the stack. If this seems peculiar, the next time you are looking out of a window, look at the reflections "on" the window. They are not occluding in any way! Similar things occur with many, many other phenomena of course. For example, in the case of burning material from a candle, the particulate is effectively so close to zero occlusion as to be zero. Not quite identical to the reflection or gloss examples, but suitable enough for a reasonable demonstration.

Sadly, Adobe has been a complete and utter failure on the subject of alpha for many, many years. If you crawl over the OpenImageIO mailing list and repository, you will find all sorts of references as to how Adobe is mishandling alpha. Adobe's alpha handling is likely a byproduct of tech debt at this point, culminating with a who's who of image folks over in the infamous Adobe Thread. Zap makes a reference to the debacle in yet-another-thread-about-Adobe-and-Alpha here. Gritz, in the prior post, makes reference to this problem:

You can probably read between the lines of Alan's video at this point. So this is two problems, one of which I've been specifically chasing for three decades, and one that relates to alpha. As for the alpha problem, I cannot speak directly to AfterEffects as I am unfamiliar with it, but the folks I have spoken with said the last time they attempted, Adobe still does not properly handle alpha. I'd suggest testing it in some other software for compositing just to verify the veracity of the claims folks like myself make, such as Nuke non-commercial, Fusion, or even Blender. All three of those should work as expected. My trust that Adobe will do anything "correct" at this point is close to zero.

As for "colour management"... that is another rabbit hole well worth investigating, although it's probably easier to find a leprechaun than pin down what "colour management" in relation to picture authorship means to some people or organizations. Keeping a well researched and reasoned skepticism in mind in all of these pursuits is key. 🤣
1 point
-
Faking it can be done with the tutorial above, using the multitude of tools you have at your disposal. I think it would be quite interesting to have a physically correct rig for this 🙂
1 point
-
I haven't done this myself so far, but I suspect the best way to go here might involve cycloid splines, blend mode cloners and Field Forces. Searching around those terms I found this tutorial from Insydium. Obviously, they are using their own particle system, but there is no reason to think the native one couldn't also do it... though I am probably not the best person to advise on the specifics of that; I don't have much experience in Cinema particles yet ! CBR
1 point
-
Apologies... I can only post one post per day. Probably better that way... 🤣

The issue, as the linked video in the previous post gets into, is sadly nuanced. The correct math within the tristimulus system of RGB emissions is remarkably simple; sadly, the software and formats are less than optimal. The dependencies of the "simple compositing operation" are:

1. Software employed. Many pieces of software are hugely problematic. See the infamous Adobe thread as a good example.
2. File formats employed. Some file encodings, such as PNG, cannot support the proper One True Alpha, to borrow Mr. Gritz's turn of phrase.
3. Data state within the software. Even if we are applying the proper mathematical operation to the data, if the data state is incorrect, incorrect results will emerge.

The short answer is that if we have a generic EXR, the RGB emission samples are likely normatively encoded as linear with respect to normalized wattages, and encoded as gained with respect to geometric occlusion. That is, in most cases, the EXR data state is ready for compositing.

If your example had a reflection off of glass, a satin or glossy effect, a flare or glare or glow, a volumetric air material or emissive gas, etc., you'd be ready to simply composite using the One True Alpha formula, for the RGB resultant emissions only^1:

A.RGB_Emission + ((100% - A.Alpha_Occlusion) * B.RGB_Emission)

Your cube glow is no different to any of the other conventional energy transport phenomena outlined above, so it would "Just Work". If, however, the software or the encoding is broken in some way, then all bets are off. That's where the video mentions that the only way to work through these problems is by way of understanding.

Remember that the geometry is implicitly defined in the sample. In terms of a "plate", the plate that the A is being composited "over" simply lists the RGB emission, which may be code value zero. As such, according to the above formula, your red cube RGB emission sample of gloss or glow or volumetric would simply be added to the "under" plate. The key takeaway is that all RGB emissions always carry an implicit spatial and geometric set of assumptions. This should never happen in well-behaved encodings and software; if it does, there's an error in the chain! JKierbel created a nice little test EXR to check whether your software is behaving poorly.

Hope this helps to clear up a bit of the problem surface. See you in another 24 hours if required... 🤣

--
1. The example comes with a caveat that the "geometry" of the sample is "uncorrelated". For "correlated" geometry, like a puzzle piece perfectly aligned with another puzzle piece, as with a holdout matte, the formula shifts slightly. The formula employed is a variation of the generic "probability" formula as Jeremy Selan explains in this linked post. If we expand the formula, we end up with the exact alpha-over additive formula above. It should be noted that the multiplicative component is actually a scaling of the stimuli, based on energy per unit area. A more "full fledged" version of the energy transport math was offered up by Yule and (often misspelled) Nielsen, which accounts for the nature of the energy transport in relation to the multiplicative mechanisms of absorption attenuation, as well as the more generic additive component of the energy.
1 point
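For anyone who wants to poke at the formula directly, here is a minimal Python sketch of it (my own illustrative helper, not from any particular package), assuming associated-alpha samples stored as floating-point values:

```python
def over(a_rgb, a_alpha, b_rgb):
    """One True Alpha 'over' for associated (premultiplied) samples.

    a_rgb   : RGB emission of the top sample, already scaled by its occlusion
    a_alpha : geometric occlusion of the top sample, 0.0..1.0
    b_rgb   : RGB emission of the sample underneath
    """
    return tuple(a + (1.0 - a_alpha) * b for a, b in zip(a_rgb, b_rgb))

# A glow/gloss/flare style sample: emission present, occlusion zero,
# so the plate underneath passes through untouched and the emission adds.
print(over((0.25, 0.0625, 0.0), 0.0, (0.5, 0.5, 0.5)))    # (0.75, 0.5625, 0.5)

# A fully occluding sample: the plate underneath is completely replaced.
print(over((0.125, 0.625, 0.125), 1.0, (0.5, 0.5, 0.5)))  # (0.125, 0.625, 0.125)
```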
-
Hello all. Flattered to be mentioned here. I just wanted to point out that the statement is not quite correct; the result will indeed include an emission component in the RGB after the composite. With associated alpha (aka “premultiplied”) the emission is directly added to the result of the occlusion. This happens to be the only way alpha is generated in PBR-like rendering systems, and is more or less the closest normative case of a reasonable model of transparency and emission. It also happens to be the sole way to encode any additive component like a gloss, a flare, a glare, fires or emissive gases, glows, etc. I’ve spent quite a few years now trying to accumulate some quotations and such on the subject, from luminaries in the field. Hope this helps.
1 point
-
From what I understand from all this, with the premultiplied (associated) method the colour values have already been multiplied by the alpha before the image is saved, so the RGB channels are stored pre-scaled by their coverage; hence the name, as the multiplying step has already been done before previewing/opening the image. The straight one is more like the raw data: the colour and the alpha are kept as two independent sets of values, and they get multiplied together after the fact to process/view the image correctly. With that, you don't have to "un-multiply" the colour if you need the original, unscaled values, as they are already available.
1 point
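A tiny sketch of that difference in Python (illustrative helper functions, not from any specific library):

```python
def premultiply(rgb, alpha):
    """Straight/unassociated -> associated: scale the colour by its coverage."""
    return tuple(c * alpha for c in rgb)

def unpremultiply(rgb, alpha):
    """Associated -> straight; only meaningful where alpha is non-zero.

    At alpha == 0 the straight colour cannot be recovered, which is exactly
    why pure emission with zero occlusion needs associated alpha.
    """
    return rgb if alpha == 0.0 else tuple(c / alpha for c in rgb)

print(premultiply((1.0, 0.5, 0.0), 0.25))       # (0.25, 0.125, 0.0)
print(unpremultiply((0.25, 0.125, 0.0), 0.25))  # (1.0, 0.5, 0.0)
```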
-
Asked ChatGPT, gave this answer:
R: 255, G: 0, B: 0, A: 0
R: 0, G: 0, B: 0, A: 0
1 point
-
The Classic - Carpet roll!
120_Carpet_Roll(MG+XP).c4d
1 point
-
If I were in this situation, I would disconnect my machine from the internet (kill wifi, pull ethernet), because then there's a good chance it will simply time out after 30 seconds.
1 point
-
I haven't watched the series yet myself, but that title sequence is wonderful. And the music is as harmonically interesting as the visuals and techniques are to us CG guys. Perfect alternating consonance and dissonance. For those of you who have an interest in such things, here's Charles Cornell to explain why that's also great ! CBR
1 point
-
I started in on the OpenPBR material with the new Redshift. I was surprised to see it in there until I saw that Autodesk had also included it in Arnold in its latest release. The future is now... OpenPBR. Saul Espinosa says it's the only shader he uses now (RS + Houdini). I've since sent a ticket to Maxon asking to have it included in the default material dropdown.
1 point
-
Interesting release... one thing I noticed right away: why do the spline modifiers and generators have the words 'Modifier' and 'Generator' after them? There is obviously no need for that, as we know it's a generator or modifier from its icon colour. Also, just like the 'Spline Wrap' modifier, the word Spline should be at the beginning, e.g. 'Spline Branch', 'Spline Break', etc... This ensures naming consistency with the rest of the modifiers/generators, as well as conciseness and being DRY (don't repeat yourself)... I feel like there was a lack of attention to detail there before release... and maybe, if they plan on adding more spline tools, they might start deserving their own separate category like the spline shapes...
1 point
-
Maxon Unveils Game-Changing Cinema 4D 2025 Update: Enhanced Modeling, Texturing, and Scene Nodes

March 31, 2025 – Los Angeles, CA – Maxon has officially announced the highly anticipated Cinema 4D 2025.1.4 update, promising groundbreaking new features that will redefine the 3D animation and motion graphics industry. This latest iteration of Cinema 4D introduces significant improvements to modeling, texturing, Scene Nodes, and animation workflows, ensuring a more seamless experience for artists.

Official Statement from Maxon’s CEO
“Cinema 4D 2025.1.4 is not just an update—it’s a revolution. We’ve listened to our users and implemented features that streamline workflows, accelerate creativity, and push the boundaries of what’s possible in 3D design. Our latest enhancements to modeling, texturing, and Scene Nodes will empower artists like never before.”
— David McGavran, CEO of Maxon

Key Features of Cinema 4D 2025.1.4:
🔹 Enhanced Parametric Modeling Tools – New and improved parametric objects offer greater control, including advanced spline editing and interactive beveling.
🔹 Advanced Scene Nodes 2.0 – A major update to Scene Nodes introduces a more intuitive interface, expanded procedural modeling options, and real-time performance boosts.
🔹 Improved UV Packing and Unwrapping – A completely reworked UV workflow includes automated packing, distortion minimization, and island grouping for better texturing efficiency.
🔹 Neural Render Engine (NRE) – Cinema 4D now features an AI-powered render engine that reduces render times by up to 90% while maintaining photorealistic quality.
🔹 Real-Time Path Tracing – A fully integrated real-time path tracer allows users to see near-final renders directly in the viewport.
🔹 Auto-Rigging AI – Character animation just got easier with an intelligent rigging system that auto-detects joints and optimizes weights instantly.
🔹 HoloC4D VR Integration – For the first time, users can sculpt, animate, and render in a fully immersive VR environment.
🔹 Redshift Cloud Render – A new cloud-based rendering system allows users to offload heavy render jobs and access high-performance GPU farms directly from Cinema 4D.
🔹 Deepfake MoGraph Generator – A new AI-assisted tool that generates realistic face animations from a single image input.
🔹 AI-Assisted Material Suggestion – The new AI-driven material engine suggests realistic textures and shader settings based on your scene context.

Spline SDF
Smooth blending of 2D splines.
Outline Capsule
Unlimited number of outlines with the new Outline Node and capsule.
Chamferer
The new Chamferer generator brings non-destructive, parametric editing to individual spline control points.
Field Distribution Cloner Mode
Fields can now be used to place child instances by mapping the hierarchy of the children to the intensity of the field.

Release Date and Availability
Cinema 4D 2025.1.4 will roll out as a free update for all Maxon One subscribers starting April 1, 2025. Perpetual license holders will have the option to upgrade at a discounted rate. For more details, visit www.maxon.net.

Exclusive Interview with Maxon’s CEO, David McGavran

Q: What was the main focus for Cinema 4D 2025.1.4?
David McGavran: “Our primary goal was to refine and enhance the tools that artists use daily. We wanted to create a more intuitive and efficient experience, whether it's through procedural modeling, advanced texturing, or improved rendering. The new Scene Nodes 2.0 is a huge leap forward, allowing artists to build complex structures with ease.”

Q: How does this update compare to previous ones?
David McGavran: “While every update brings innovations, this one focuses on practical enhancements that improve workflow speed and creative flexibility. We've also made significant improvements to UV workflows, Boolean modeling, and dynamic simulations based on user feedback.”

Q: Can you give us a sneak peek into the future of Cinema 4D?
David McGavran: “Absolutely. We are already developing features for Cinema 4D 2026 that will push procedural modeling even further. Expect deeper Redshift integration, better scene organization tools, and an overhaul of particle simulations. We're also exploring more real-time collaboration features to make remote workflows even smoother.”
1 point
-
@3DKiwi Nice to see you hanging around here, was wondering if you are still somehow in the 3D world : ) Still biking?
1 point
-
CORE4D has got a great beginning series:
My Scene Nodes 101 series isn't exhaustive but has some simpler examples:
For more advanced training, Dominik Ruckli has some great videos: Dominik Ruckli - YouTube
Additionally - once you've got a handle on Scene Nodes data structures and beginning/intermediate setups, you can generally follow along with Houdini / Geo Nodes tutorials. For those, it is hard to do better than Entagma: Entagma - YouTube
1 point
-
Just after a few minutes of testing and a few crashes, I think the OpenPBR shader doesn't work with Material translation (Viewport/Export) set to Baking. I was reworking nodes and it kept crashing. I turned it back to Draft and so far no crashes.
0 points