Everything posted by HappyPolygon
-
Well.... I've asked (and am still asking) for a Helix 2.0 with multiple modes like Logarithmic, Archimedean and Double, but primitives won't ever change for fear of breaking compatibility with older projects. But you could try it just on a Symmetry Object ...
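For anyone wondering what those modes would actually mean: they're just different radius functions over the winding angle. A minimal sketch in Python (hypothetical function, not C4D API — the standard spiral formulas, Archimedean r = a + bθ and logarithmic r = a·e^(bθ)):

```python
import math

def spiral_points(mode, turns=3, samples_per_turn=32, a=1.0, b=0.2):
    """Generate 2D points for different spiral modes.

    Archimedean: r = a + b*theta  (constant spacing between coils)
    Logarithmic: r = a * e^(b*theta)  (spacing grows geometrically)
    Add a z increment per sample and you have the matching helix.
    """
    pts = []
    n = turns * samples_per_turn
    for i in range(n + 1):
        theta = 2 * math.pi * i / samples_per_turn
        if mode == "archimedean":
            r = a + b * theta
        elif mode == "logarithmic":
            r = a * math.exp(b * theta)
        else:
            raise ValueError(mode)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts
```

Feed points like these into a spline and you get the helix variants the primitive doesn't offer.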
-
Dash

The full version of Dash is intended to let artists build complex environments without having to navigate the Unreal Editor’s interface, working primarily in fullscreen mode in the viewport.

It provides readymade behaviors – accessible by typing natural-language search terms into a floating prompt bar – for common scene-building tasks like ‘create terrain’ and ‘apply water’, plus scattering tools for dressing environments.

As well as game development, Dash can be used for offline work: for example, to create animations, visual effects projects or architectural visualizations in Unreal Engine. You can read more about its features in this story on Dash 1.9, the current release.

Polygonflow has now updated the trial version of Dash so that artists can use part of its toolset for free indefinitely. You can now use all of the features for free for 14 days, after which point you have access to the content browser, but not other functionality like object scattering, physics or material blending.

The change makes Dash a more fully featured free alternative to Unreal Engine 5’s native Content Browser, with support for asset tagging, and fuzzy, semantic and Boolean search. You can read about its functionality – much of which was added in Dash 1.9, and which makes it quicker to search large collections of UE5 assets – in the online release notes.

Dash also integrates with Fab, Epic Games’ online marketplace, and comes with libraries of free IES lights and CC0 assets from Poly Haven.

Dash is compatible with Unreal Engine 5+ on Windows 10+ only. Free licenses work in demo mode for 14 days, providing access to the full toolset; then in free mode indefinitely, providing access to the Content Browser and AI assistant only.

https://www.polygonflow.io/try-dash

CharMorph

CharMorph is a successor to the previous open-source Blender character generator MB-Lab – itself a community fork of the popular Manuel Bastioni Lab.
Both have now been discontinued: Manuel Bastioni Lab in 2018, and MB-Lab last year, with development work moving on to CharMorph following the release of MB-Lab 1.8.1. According to the development team, CharMorph reimplements “most of MB-Lab’s features”, and uses the same base meshes and character morphs, but does not contain any of MB-Lab’s code.

CharMorph makes it possible to create custom 3D characters by starting with one of the base 3D characters included with the software and applying morphs to modify it. The morphs have to be baked destructively before the character is rigged, although it is possible to export them in advance, making it possible to keep iterating on a character design. It is also possible to use CharMorph to modify an existing imported 3D character, including those with different topology.

Exported characters can be rendered with Eevee, Blender’s real-time renderer, or Cycles, its main production renderer: you can find more details in the online documentation.

Although it lacks some of the features from MB-Lab, such as auto-modeling, CharMorph has a number of advantages over its predecessor, listed on the project’s GitHub page. Key benefits include support for Rigify, Blender’s modular character rig creation system, including for facial rigs, and real-time fitting of clothing. It is also possible to set skin and eye color directly, and displacement is done at material level rather than using the Displace modifier, making it possible to preview the effect in Eevee.

But perhaps most significantly, it is possible to use characters generated with CharMorph in any kind of commercial work, including closed-source games. Whereas the base character from MB-Lab is licensed under an AGPL license, CharMorph has three alternative base characters with Creative Commons licenses: either CC-BY attribution licenses, or in the case of the Vitruvian character added in the latest update, a full CC0 license.
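Under the hood, morph-based character generators like this store each morph as per-vertex offsets from the base mesh, blended by user weights. A toy sketch of that idea (hypothetical function names, not CharMorph's actual code):

```python
def apply_morphs(base_verts, morphs, weights):
    """Blend per-vertex morph deltas onto a base mesh.

    base_verts: list of (x, y, z) vertex positions
    morphs: dict of morph name -> list of (dx, dy, dz) deltas,
            one delta per base vertex
    weights: dict of morph name -> blend weight (0 = off, 1 = full)
    """
    result = []
    for i, (x, y, z) in enumerate(base_verts):
        for name, deltas in morphs.items():
            w = weights.get(name, 0.0)
            dx, dy, dz = deltas[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        result.append((x, y, z))
    return result
```

"Baking destructively" then simply means writing the blended result back as the new base mesh, discarding the editable weights.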
In a story on BlenderNation, the developers comment that they aim to provide character models that can compete with those from closed-source solutions like Character Creator or Daz Studio, “achieving the level of quality seen in blockbuster movies and next-gen video games”.

As well as the Vitruvian character, CharMorph 0.4 features a number of other improvements, including the option to download characters or update the add-on directly within Blender. The development team says that it now plans to extend the software beyond humanoid characters, making it possible to create animals and other creatures.

CharMorph is compatible with Blender 4.4. It’s a free download. The software is open-source: the source code is available under a GPLv3 license. The individual base characters are available under a range of licenses: the new Vitruvian character is CC0.

https://blendercharacterproject.org/
https://github.com/Upliner/CharMorph

Yeti 5.2

The update adds two new nodes for resolving intersection issues: Plume, for resolving feather-feather and feather-geometry intersections, and Resolve, for fiber intersections.

Other new features include cross-platform file path mapping, following a similar convention to the Arnold renderer, and a new C++ API for evaluating Yeti graphs. The previous 5.1 updates were primarily bugfix releases, adding support for the current versions of Maya and the supported renderers.

Yeti 5.2 is available for Maya 2024+, running on Windows 10+, and RHEL and Rocky Linux 8.5. Indie licenses – one perpetual node-locked workstation licence and one render licence – are available to artists with revenue under $100,000/year, and cost $329. Studio licenses – one perpetual floating workstation licence, plus five render licences – cost $699. Further packs of five render licences cost $399.
https://docs.peregrinelabs.com/release-notes

Zen UV 5.0

New features in Zen UV 5.0 include the Zen UV Touch Tool, which makes it possible to perform common operations like moving, scaling, rotating or snapping UV islands more quickly by manipulating them directly in Blender’s UV Editor, using an intuitive-looking gizmo.

There is also a new Auto Unwrap operator, which provides a bridge to Ministry of Flat, Quel Solaar’s free standalone Windows-only UV unwrapping tool.

Workflow improvements include a new Zen Sync operator, which preserves edge and face selections when switching between working modes. There are also new options for selecting intersecting or stretched faces, new control options in the trimsheet tools, and updates to Zen UV’s native UV unwrapping algorithm.

Zen UV 5.0 is compatible with Blender 4.0+. A single-user license costs $39. The update is free to users of version 4.x.

https://zenmastersteam.github.io/Zen-UV/latest/changelg/release_note_5.0.0/
-
Darker background. It's supposed to be space after all.
-
Maybe I'll do it using XPresso... a boole switch on editor and renderer visibility
-
@Hrvoje @Donovan K MXN Well... I guess that's one more entry on my suggestions list for future improvements.... I remember that back in R23, when Scene Nodes made their first appearance, people found a way to import OM objects using some kind of Alembic node... is there any similar hack? I have imported a collection of splines from Blender that are impossible to draw by hand. In Blender they are parametric, but I have no idea how to reconstruct them in Scene Nodes, so I figured a simple boole of a collection is enough for me for now.... AHA! There's a Classic Object Import!.... Hmmm... I can't make it have an Output Port... could it be a bug?
-
This is what I get with RMB. Should I import a mesh from the OM some other way? I just drag-and-dropped it... (C4D 2025.0.1)
-
Chasm's Call - Latest Render Challenge from Pwnisher - Final Stream
HappyPolygon replied to HappyPolygon's topic in News
WINNER -
How do I save a primitive as op?
-
There's a Mesh Primitive in Scene Nodes that packs a bunch of other objects under one menu. How can I construct something like it? I can't find a way to access its underlying node structure to see how it's made (Edit Asset is disabled).
-
Chasm's Call - Latest Render Challenge from Pwnisher - Final Stream
HappyPolygon replied to HappyPolygon's topic in News
FINALS -
AMD FSR 4, AFMF 2.1 and RIS 2.0

Introduced in 2021, FSR is a GPU-based image upscaling system, equivalent to NVIDIA’s Deep Learning Super Sampling (DLSS) and Intel’s Xe Super Sampling (XeSS). It enables software to render the screen at lower resolution, then upscale the result to the actual resolution of the user’s monitor. The workflow improves frame rates over rendering natively at the higher resolution, without significant loss of visual quality if the upscaling isn’t too extreme.

As well as games, FSR is supported by some CG software, including real-time visualization apps Lumion, where it is used in the editor and render preview, and D5 Render.

Perhaps learning from the previous major release, FSR 3, which was only supported by two games at launch, AMD has made FSR 4 available as a drop-in replacement for FSR 3.1. Users with new Radeon RX 9070 Series GPUs can upgrade any game that already supports FSR 3.1, the current version of the technology, to FSR 4. That includes Call of Duty: Black Ops 6, God of War: Ragnarök and Marvel Rivals: you can see a list of the “30+ games” in which the upgrade is or will be available on AMD’s blog.

The update provides a “significant” improvement in image quality over FSR 3.1 upscaling, improving temporal stability and detail preservation, and reducing ghosting.

AMD Software: Adrenalin Edition 25.3.1 also updates AMD Fluid Motion Frames, its in-driver system for increasing frame rates by generating frames between those rendered conventionally. It was originally rolled out last year as part of HYPR-RX, AMD’s set of technologies for improving in-game performance.

AFMF 2.1 improves the visual quality of the frames generated: according to AMD, it reduces ghosting, restores details, and handles on-screen text overlays better. As an in-driver technology, it works across “thousands of games”: AMD’s blog post has performance comparisons for Baldur’s Gate 3, Borderlands 3, Far Cry 6 and Forza Horizon 5.
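The core idea behind upscalers like FSR – render small, resample to the display resolution – can be shown with a deliberately naive sketch. This is a toy nearest-neighbour resampler for illustration only; FSR 4 itself uses a machine-learning upscaler with temporal data, not anything this simple:

```python
def upscale_nearest(img, factor):
    """Nearest-neighbour upscale of a 2D grid of pixel values.

    Each source pixel is repeated 'factor' times horizontally
    and vertically, so a 960x540 render becomes 1920x1080 at
    factor=2, at a fraction of the native rendering cost.
    """
    out = []
    for row in img:
        scaled_row = [px for px in row for _ in range(factor)]
        out.extend([list(scaled_row) for _ in range(factor)])
    return out
```

The quality gap between this and FSR/DLSS-style upscaling is exactly where the "temporal stability and detail preservation" claims above come in.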
The Adrenalin Edition 25.3.1 release also features the first major update to contrast-adaptive image-sharpening system Radeon Image Sharpening since it was introduced in 2019. RIS 2.0 provides “stronger, more responsive sharpening in more use cases”, particularly video playback. As well as games, RIS is supported in a range of other apps, including web browsers like Chrome, Edge and Firefox, Microsoft Office apps, and the VLC media player.

The FSR 4 upgrade feature, AFMF 2.1 and Radeon Image Sharpening 2.0 are available as part of the 25.3.1 release of AMD Software: Adrenalin Edition. The FSR 4 upgrade is available for software that already integrates FSR 3.1, part of version 1.1.13 of the FidelityFX SDK. The source code is available under an open-source MIT license. FSR 4 and RIS 2.0 require a Radeon RX 9070 Series GPU; AFMF 2.1 is supported on Radeon RX 6000 Series and newer GPUs, and integrated graphics on Ryzen AI 300 Series CPUs.

https://community.amd.com/t5/gaming/game-changing-updates-fsr-4-afmf-2-1-ai-powered-features-amp/ba-p/748504

Capsaicin 1.2

First released publicly in 2023, Capsaicin is a modular open-source framework for prototyping and developing real-time rendering technologies, primarily for games. It is designed for developing in broad strokes, creating simple, performant abstractions rather than low-level hardware implementations, and is not intended for tuning high-performance tools.

The framework is intended for developing Windows applications, but is GPU-agnostic, requiring a card that supports DirectX 12/Direct3D 12 and DXR (DirectX Raytracing) 1.1. AMD has used it in the development of its own rendering technologies, including an implementation of GI-1.0, its real-time global illumination algorithm.

As well as the GI renderer, the framework includes a reference path tracer. Other features include readymade components for Temporal Anti-Aliasing (TAA), Screen-Space Global Illumination (SSGI), light sampling, tonemapping, and loading glTF files.
The framework also includes HLSL shader functions for sampling materials and lights, spherical harmonics, and common mathematical operations, including random number generation.

To that, Capsaicin 1.2 adds support for rendering morph target (blendshape-based) animations, in addition to its existing support for skinned characters. The update also adds support for meshlet-based rendering, for streaming and decompressing high-resolution geometry at render time, in a way similar to UE5’s Nanite system.

Other new features include support for the .dds texture file format, used in games including Elden Ring and GTA V, and bloom and lens effects from AMD’s FidelityFX toolkit. The update also adds a range of new tonemappers: the framework now defaults to ACES tonemapping, with support for Reinhard, Uncharted2, PBR Neutral and AgX, the latter now supported in Blender, Godot and Marmoset Toolbag.

The Capsaicin source code is available under an open-source MIT license. It can be compiled for Windows 10+ only, and requires a GPU that supports Direct3D 12 and DXR 1.1. Compiling from source requires Visual Studio 2019+ and CMake 3.10+.

https://gpuopen.com/learn/new_content_released_on_gpuopen_for_amd_rdna_4_on-shelf_day/
https://gpuopen.com/capsaicin/

Flow Studio

Autodesk has rebranded Wonder Studio, its AI-powered online platform for inserting 3D characters into video footage, as Autodesk Flow Studio. The change in branding officially makes Wonder Studio part of Flow, Autodesk’s cloud services platform for media and entertainment, but there are no changes to availability or pricing.

In separate news, an update to the platform last month made extracting clean plates and camera tracking data from source footage available as standalone services, or ‘Wonder Tools’.
Unlike when processing a complete Live Action project – that is, generating a new rendered video and supporting data – the new Camera Track Wonder Tool can be used on footage that does not include an actor. In contrast, the Clean Plate Wonder Tool can only be used to remove human actors from footage, not animals or objects, although it can be used for up to four actors per shot.

The Autodesk Flow Studio platform is browser-based, and runs in Chrome or Safari. It does not currently support mobile browsers. Lite subscriptions have a standard price of $29.99/month or $299.88/year, can export rendered video at 1080p, and can export mocap data, clean plates, camera tracks and 3D scenes. Pro subscriptions have a standard price of $149.99/month or $1,499.88/year, raise the maximum export resolution to 4K, and also make it possible to export roto masks.

Usage is credit-based: processing one second of video uses 20 credits. Lite subscriptions include 3,000 credits/month, Pro subscriptions 12,000 credits/month. The Terms of Service give Autodesk a non-exclusive licence to use any content created via the platform to develop its AI models.

GIMP 3.0

The GIMP development team has released GIMP 3.0, the first major update to the free and open-source image editing and retouching software in seven years. The release adds non-destructive layer effects and text styling, off-canvas image editing, support for the Adobe RGB color space, and better interoperability with Photoshop. There are also a number of long-awaited usability improvements, including the option to multi-select layers.

The first major update to the GNU Image Manipulation Program since 2018, GIMP 3.0 is a correspondingly significant release. The main change is support for non-destructive layer effects, making it possible to edit filters and image adjustments like Curves and Hue-Saturation after they have been applied. Adjustments can also be toggled on or off, re-ordered, or merged.
Non-destructive (NDE) filters can be saved in GIMP’s XCF file format, making it possible to continue to edit layer adjustments after re-opening a file.

The options for styling text have also been extended, with the Text tool getting new options for generating outlines around text non-destructively. A separate GEGL Styles filter – GEGL being the Generic Graphics Library, GIMP’s image-processing engine – also generates drop shadows, glows and bevels, as shown above.

Other new features include support for off-canvas editing, with a new Expand Layers option for the paint tools making it possible to paint beyond the current boundaries of a layer. The layer is automatically resized to fit the new paint strokes, up to the boundaries of the image. There is also an experimental new selection tool, Paint Select, which makes it possible to select parts of an image for editing by progressively painting in the selection.

GIMP 3.0 also features some long-awaited workflow improvements: notably, the option to multi-select layers, channels and paths. Previously, it was only possible to edit multiple layers by selecting and linking them individually. It is also now possible to organize layers into layer sets, and to search for layers by name; and copying and pasting now creates a new layer by default, not a floating selection.

There is also a new Welcome dialog (shown above), which provides quick access to documentation, recent files, and UI settings, including color themes, and icon and font scaling. In addition, the Search command now shows the menu location of the action you have searched for in the search results.

The release also introduces support for RGB color spaces beyond simple sRGB, making it possible to load and edit images with Adobe RGB color profiles without losing information. The change also “lays the groundwork” for support for the CMYK and LAB color spaces.
Interoperability with Photoshop has been improved, expanding support for the PSD file format, and making it possible to load JPEGs and TIFFs with Photoshop-specific metadata like clipping paths, guides and layers. It is also now possible to import color palettes in Adobe’s ACB and ASE formats, as well as the open-source SwatchBooker palette format.

Other changes include support for new lossless image compression formats QOI and JPEG XL, and for DDS game texture files with BC7 compression. And while it isn’t currently possible to edit in CMYK mode, it is possible to import and export CMYK JPEG, JPEG XL, TIFF and PSD files, with GIMP converting from RGB color space.

GIMP 3.0 also introduces support for more languages for developing scripts and plugins: it is now possible to develop add-ons using JavaScript, Lua and Vala as well as C. The update also switches GIMP from Python-fu to standard Python 3. The changes break compatibility with add-ons written for GIMP 2.10, although some popular plugins like G’MIC are already available for GIMP 3.0.

GIMP 3.0 is compatible with Windows 10+, macOS 11.0+ and Linux, including Debian 12+. Source code is available under a GPLv3 license. The software is free, but if you want to support future development, or support its core developers, you can find details of how to do so on the Donate page on the GIMP website.

https://www.gimp.org/release-notes/gimp-3.0.html

Feather 1.1

First released in 2022, Feather lets artists create 3D concept designs on iPads. It uses an interesting workflow, in which you first draw 3D guide surfaces, then draw strokes on top of the guides, rotating the developing sketch in 3D using touch gestures.

The 3D models can be exported to other DCC apps in glTF or OBJ format, while a free Blender add-on makes it easier to edit assets created in Feather in the open-source 3D software. Users can also export 2D images, videos and turntable animations directly from the app.
In 2024, the software, which had been available free in early development, became a paid app, and the Chrome web app was temporarily discontinued.

Feather 1.1 improves workflow in the app when using an Apple Pencil stylus. Squeezing the Apple Pencil now brings up a customizable menu with commonly used tools and commands, including a new Find Group command to help navigate complex scenes. In addition, tools now respond dynamically when hovering the Pencil over them: for example, hovering over a 3D guide lets you preview brush size and color before beginning to draw.

Users with the new Apple Pencil Pro also get haptic feedback when erasing lines, making selections, or sampling colors and brush properties. Sketchsoft pitches the tactile feedback as making sketching feel more immersive, increasing precision.

Feather 1.1 is compatible with iPadOS 16.0+. A perpetual license costs $14.99.

https://support.feather.art/docs/whatsnew

NVIDIA Blackwell RTX PRO GPUs with 96GB VRAM

NVIDIA has unveiled the RTX PRO Blackwell series, its new range of professional workstation, laptop and data center GPUs based on its current Blackwell GPU architecture. The first two desktop GPUs, the 96GB RTX PRO 6000 Blackwell Workstation Edition and RTX PRO 6000 Blackwell Max-Q Workstation Edition, will be available through distribution partners next month. NVIDIA pitches the RTX PRO 6000 Blackwell Workstation Edition – the larger, more power-hungry version of the card – as “the most powerful desktop GPU ever created”.

The other desktop cards – the RTX PRO 5000, RTX PRO 4500 and RTX PRO 4000 Blackwell GPUs – will be available later in the year, along with the laptop and data center GPUs.

NVIDIA describes the RTX PRO Blackwell series, which is being marketed at agentic AI, simulation, 3D design and VFX, as a “revolutionary generation” of GPUs with “breakthrough accelerated computing, AI inference, ray tracing and neural rendering technologies”.
The new cards are the first professional GPUs based on NVIDIA’s Blackwell architecture, following the rollout of their consumer counterparts, the GeForce RTX 50 Series, earlier this year.

All feature the latest iterations of NVIDIA’s key hardware core types: CUDA cores for general GPU compute, Tensor cores for AI operations, and RT cores for hardware ray tracing. NVIDIA claims that their fifth-gen Tensor cores and fourth-gen RT cores offer “up to 3x” and “up to 2x” the performance of their counterparts in its previous Ada Lovelace GPU architecture. Their Streaming Multiprocessors, into which the CUDA cores are grouped, have “up to 1.5x faster throughput” than their predecessors, and feature new Neural Shaders that integrate small neural networks into programmable shaders.

The new GPUs use GDDR7 memory and PCIe 5.0 buses, increasing data transfer bandwidth over the Ada Generation cards, and support the current DisplayPort 2.1b display standard. The RTX PRO 6000 and 5000 cards also support NVIDIA’s Multi-Instance GPU (MIG) technology for partitioning a single GPU into smaller instances.

NVIDIA has only released full specs for the top-of-the-range RTX PRO 6000 Blackwell cards so far, but they are a clear step up from the previous-gen RTX 6000 Ada Generation. Hardware core counts and compute performance are significantly higher, while GPU memory doubles to 96GB, and memory bandwidth almost doubles.

Power consumption is unchanged – at least for the RTX PRO 6000 Blackwell Max-Q Workstation Edition, which features the same standard cooling design as its predecessor. That figure doubles to 600W in the RTX PRO 6000 Blackwell Workstation Edition, which uses a double flow-through design, although the increase in compute performance is much smaller.

The specs for the lower-end cards, which also use standard cooling designs, still contain a number of ‘TBCs’, although we would expect them to follow a similar pattern.
What we don’t yet know is how the Blackwell cards will stack up in terms of price-performance, since NVIDIA hasn’t announced recommended pricing.

NVIDIA has also announced its RTX PRO Blackwell series laptop and data center GPUs. We don’t usually cover either on CG Channel, but you can see NVIDIA’s summary chart for the laptop GPUs above: as well as counterparts for the desktop RTX PRO 5000 and 4000 Blackwell, there are four lower-end cards, the RTX PRO 3000, 2000, 1000 and 500 Blackwell. Specs for the RTX PRO 6000 Blackwell Server Edition can be found on NVIDIA’s website.

The new RTX PRO Blackwell series cards mark a change of branding from the current RTX Ada Generation GPUs, with the new ‘PRO’ tag clearly targeting them at professional users. What constitutes a professional user is perhaps less clear, since NVIDIA now acknowledges in its marketing for its Blackwell consumer cards, the GeForce RTX 50 Series, that they are used by both “gamers and creators”, with many CG artists using them for rendering.

Rather than video production or VFX, the use cases cited in NVIDIA’s release announcement for the RTX PRO Blackwell cards are computational AI, health care, and design visualization. As a result, it’s hard to assess how the new GPUs’ specs will translate into real-world performance in CG applications.

Of the examples given, the closest to DCC work is Cyclops, architecture firm Foster + Partners’ GPU ray tracing product for view analysis, which is described as running at “5x the speed” on the new RTX PRO 6000 Blackwell Max-Q Workstation Edition as on the NVIDIA RTX A6000 – its counterpart from two GPU generations ago, which is now over four years old.

Electric vehicle manufacturer Rivian is also quoted as saying that the RTX PRO 6000 Blackwell Workstation Edition delivers “the most stunning visuals we have ever experienced in VR” in Autodesk’s VRED visualization software, but no actual performance figures are given.
The RTX PRO 6000 Blackwell Workstation Edition and Max-Q Workstation Edition will be available via PNY and TD SYNNEX in April 2025, and via workstation manufacturers in May 2025. The RTX PRO 5000, 4500 and 4000 Blackwell desktop GPUs will be available “in the summer”. RTX PRO Blackwell laptop GPUs will be available via Dell, HP, Lenovo and Razer “later this year”. NVIDIA hasn’t announced recommended prices for the new GPUs yet.

https://www.nvidia.com/en-us/products/workstations/professional-desktop-gpus/

Unity 6.1 and Roadmap

For artists, Unity 6.1 brings improvements in rendering performance, including a new Deferred+ rendering path in the Universal Render Pipeline (URP) for mobile and web games. It improves performance over the existing Deferred rendering path in complex environments, using “advanced cluster-based culling” to support more real-time lights.

Both the URP and High Definition Render Pipeline (HDRP) get support for Variable Rate Shading, making it possible to set the shading rate for custom passes, improving performance without significantly affecting visuals. Variable Rate Shading is supported via Vulkan on Android and PC, via DirectX 12 on Xbox and PC, and on the PlayStation 5 Pro.

Developers of Windows and Xbox games also get improvements in DirectX 12 performance, with a new split graphics job threading mode submitting commands to the GPU faster. According to Unity, it leads to a reduction in CPU time of “up to 40%”. DirectX 12 ray tracing performance has also been improved via Solid Angle Culling, to avoid rendering very small or distant instances, improving CPU performance by “up to 60%”. There are also a number of more general optimizations, leading to a reduction in ray tracing memory usage of “up to 75%”.

The other new features primarily affect programmers as opposed to artists. They include a new Project Auditor for static analysis, which analyzes scripts, assets and project settings to help identify performance bottlenecks in a project.
Build automation is also now integrated into the Unity Editor. There are also changes to platform support, particularly for Android games, including support for the larger 16KB page sizes introduced in Android 15. It is also now possible to match Vulkan graphics configurations to different Android devices, filtering by vendor, brand, product name, and OS, API and driver versions.

Developers of extended reality experiences on Android get a number of changes, including integration with key Unity toolsets like AR Foundation and the XR Interaction Toolkit. Unity 6.1 also introduces support for Instant Games on Facebook and Messenger, and WebGPU support for mobile web games.

Unity 6.1 is currently in public beta. The stable release is due in April 2025. The Unity Editor is compatible with Windows 10+, macOS 11.0+ and Ubuntu 22.04/24.04 Linux. Free Personal subscriptions are now available for artists and small studios earning under $200,000/year, and include all of the core features. Pro subscriptions, for mid-sized studios, now cost $2,200/year. Enterprise subscriptions, for studios with revenue over $25 million/year, are priced on demand.

https://unity.com/releases/editor/beta
https://unity.com/blog/unity-engine-2025-roadmap

Features due later in the Unity 6.x series include a new Mesh LOD system. Unity describes it as providing “compact automated LOD generation in-editor”, making it possible to generate levels of detail for both static and skinned meshes directly inside the Unity Editor, rather than having to configure LOD levels in an external 3D modeling app.

Unity’s new animation system will also be available as an official preview later during the Unity 6.x release cycle. Changes include support for procedural rigging for any skeletal asset, not just characters; and for remapping animations across assets with differing size, proportions and hierarchies.
The slide above also namechecks a new animation blending system, with support for per-bone masking and layer blending, and pose correction via a new underlying rig graph. The new animation system will also feature a reworked hierarchical, layered State Machine capable of scaling to “thousands” of characters. At 32:27 in the video, you can see a demo of a crowd animation with 1,000 characters running in-editor at 30fps, although Unity didn’t say what hardware configuration it was running on.

Changes to the physics system include a swappable physics backend, making it possible to switch between physics engines in project settings. The slide above mentions “initial support” for Havok Physics and PhysX, but Bullet Physics and MuJoCo were also namechecked in the presentation. The native Unity Physics system will get new solvers for “more complex and reliable” behaviors.

Interface designers get updates to Unity’s UI Toolkit. Key changes include the option to render the UI directly in world space, for more immersive XR experiences, and to apply post effects like blur or color shifts. Support for vector graphics will reduce asset file sizes, and enable assets to scale across device screen sizes without loss of visual quality. It will also be possible to modify the ubershader without recreating it, allowing for “detailed adjustments to text, graphics and textures through familiar Shader Graph workflows”.

Unity also announced updates to Unity Muse, its in-editor generative AI toolset. Changes include support for video-to-motion as well as text-to-motion for animation, making it possible to generate “nuanced animations” from smartphone reference footage. There will also be a new library of LoRAs for generating sprites, pre-trained for use cases like icons, props and platformer backgrounds. 3D mesh and texture generation and skybox generation are further off: presumably in the Unity 7.x release cycle.
Other changes covered in the video include new live collaboration features, making it possible to edit files locally and have revisions synced automatically to the cloud. Programmers get improvements to the Entity Component System, the Unity Profiler, and new live game development features, covered in detail in the final section of the video.

However, several key upcoming features that Unity had previously announced were not mentioned during the GDC presentation, and will presumably not arrive in 2025. They include updates to the world-building tools, the new unified renderer and Shader Graph 2, all previewed at Unite 2024 last year. You can find more details in this story.

https://unity.com/blog/unity-engine-2025-roadmap
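The per-bone masking and layer blending mentioned for Unity's new animation system reduces to a simple idea: blend two poses bone by bone, each bone with its own weight. A toy scalar sketch (hypothetical function; production systems blend quaternions and transforms, not single angles):

```python
def blend_poses(base_pose, layer_pose, bone_mask):
    """Per-bone linear blend of two poses (bone name -> rotation value).

    bone_mask gives each bone a weight in [0, 1]: 0 keeps the base
    pose, 1 takes the layer pose entirely. Bones absent from the
    mask default to weight 0, i.e. unaffected by the layer.
    """
    blended = {}
    for bone, base_rot in base_pose.items():
        w = bone_mask.get(bone, 0.0)
        blended[bone] = (1.0 - w) * base_rot + w * layer_pose.get(bone, base_rot)
    return blended
```

Masking an "upper body" layer onto a run cycle, for example, is just setting the mask to 1 for spine and arm bones and 0 elsewhere.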
-
The plugin is free: https://www.dropbox.com/scl/fi/grkckoxnbnziu4vwjsbja/edgeToSpline2024Jan.zip?rlkey=rop9b4zs1ffde5uua9il1vvfe&e=1&dl=0

Sketch and Toon has been an effect of C4D's native Standard renderer since 2008.
-
-
I tested this in 2025.0.1 and it works as in the video.
-
This isn't parametric, though... It requires a lot of manual edge and point selection, but once you are happy with the grid density you can tweak the gravitational strength and its center (it's the FFD Deformer). It also requires a plugin from Noseman. If you have MAXON ONE you may be able to replace it with the Edge to Spline capsule, if it supports selections.
-
The denser the "space" mesh is, the smoother its lines will be. You can make a dense mesh and choose which edges should be rendered, making it look like this. If it's Redshift you use to render, I can't help you. But with Sketch and Toon I can.
-
Tags are also yellow... it could be a remnant from the last update... (Tags belong to the Create menu.) If he's boasting about rigs imported into UE, the File menu should also be in yellow...
-
His latest update was 4 days ago, about ZBrush. I think the next release will be on the 20th of April... almost a month from now.
-
Well, if all clones are the same object, you could just put multiple copies of it with different Materials under the Cloner...
-
Object guided by two pivot points on one rail
HappyPolygon replied to Benjamin Mueller's topic in Cinema 4D
OK, this is very easy. No XPresso, I promise. This is a step-by-step guide because there are *ahem* issues when uploading scene files...

1. Create your Rail Spline. When I test things like this I want to be sure it works for all possibilities, so I chose an irregular spline... a Sine Wave. I made it editable so I could elongate it.
2. Now put your guiding Null in a Cloner set to Object mode. Assign the Rail Spline to it. Make only two instances of the Null - we need only two guide nulls. Adjust the End to bring the 2nd instance closer to the 1st. The distance doesn't matter, we'll fix it later.
3. Make a Tracer and put the Cloner in it. Set it to Connect All Objects.
4. Now create the object to be sliding... I made this thing. Make sure you position its Pivot Point at the end/edge of it.
5. Make a new Cloner. Set that to Object mode too. Assign the Tracer to it. Use only one instance.
6. Now adjust the End of the first Cloner to match the other end of the object as closely as possible... Mine was at 11.5%.

Now if you animate the Offset you get what you need.

But if you want multiple objects to slide, that's a different method, but shorter. You'll need these ingredients:

- A Cloner
- A Target Effector
- A Geometry Axis node
- A Geometry Orientation node

Set the Target Effector to Next Node and Use Pitch. The Cloner is, as usual, in Object mode and fed with the Rail Spline. This time I used just Cubes. You just need to adjust the End to regulate the distance between instances.

Yours won't look perfect on the first try; mine didn't either. That's why I used the two nodes. The first was to adjust the Pivot of the parametric Cube (you might not need it since yours is a custom mesh and you can move the Pivot wherever you like). The second one was to properly align the orientation of my Cube along the Spline, as nothing else could fix it.

And there you have your train. You can use a copy of your spline to make the instances appear as if they are always just touching the "now-invisible" Rail, or use a Plain Effector to Offset them a bit higher...
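If it helps to see what the Cloner's Object mode is doing under the hood, the Start/End/Offset parameters are essentially an arc-length distribution along the spline. Here is a minimal plain-Python sketch of that idea - this is not the Cinema 4D Python API; the polyline, function names and parameters are my own stand-ins for the Rail Spline and the Cloner settings:

```python
# Plain-Python sketch (NOT the c4d API) of distributing clones along a
# spline by arc length, then sliding them with an animated Offset.
import math

def arc_lengths(points):
    """Cumulative distance along a polyline of (x, y) tuples."""
    lengths = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        lengths.append(lengths[-1] + math.hypot(x1 - x0, y1 - y0))
    return lengths

def point_at(points, t):
    """Position at normalized arc-length parameter t in [0, 1]."""
    lengths = arc_lengths(points)
    target = t * lengths[-1]
    for i in range(1, len(lengths)):
        if target <= lengths[i]:
            seg = lengths[i] - lengths[i - 1]
            f = (target - lengths[i - 1]) / seg if seg else 0.0
            (x0, y0), (x1, y1) = points[i - 1], points[i]
            return (x0 + f * (x1 - x0), y0 + f * (y1 - y0))
    return points[-1]

def distribute(points, count, start=0.0, end=1.0, offset=0.0):
    """Place `count` clones between Start and End, shifted by Offset."""
    if count == 1:
        ts = [start]
    else:
        step = (end - start) / (count - 1)
        ts = [start + i * step for i in range(count)]
    return [point_at(points, min(max(t + offset, 0.0), 1.0)) for t in ts]

# A hypothetical L-shaped "rail" of total length 7.
rail = [(0, 0), (4, 0), (4, 3)]
print(distribute(rail, 2, end=0.5))               # two guide nulls, End at 50%
print(distribute(rail, 2, end=0.5, offset=0.25))  # animating Offset slides both
```

Animating `offset` moves both guide points along the rail while keeping their spacing in arc length, which is exactly why the two Nulls stay a fixed distance apart as the Cloner's Offset is keyframed.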
-
Object guided by two pivot points on one rail
HappyPolygon replied to Benjamin Mueller's topic in Cinema 4D
No, I'm just wondering why you don't use the Spline Wrap Deformer, or the Cloner's Spline mode if it's about a modular object later... Is there a specific reason you want to drive the position of the Slider Object using those Nulls? In your scene the Slider Object is baked. Don't you want it to deform along the spline? If you make the object deform along the spline, you can simply make the Nulls children of it and they will remain at those parts of the object.