Everything posted by EricNS

  1. The Relief object is old and limited. You are better off using a Displacer deformer: it works on any polygonal geometry, with any procedural or bitmap texture, and it supports fields. It's quite powerful.
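     If you want to set this up by script, here is a minimal sketch using the Cinema 4D Python API, meant to be run from the Script Manager (where "doc" is predefined). The only assumption is that c4d.Odisplacer is the Displacer's object type ID; the displacement shader and fields are then assigned in the deformer's attributes as usual:

     import c4d

     def main():
         obj = doc.GetActiveObject()  # the polygon object you want to deform
         if obj is None:
             return
         displacer = c4d.BaseObject(c4d.Odisplacer)  # create a Displacer deformer
         displacer.SetName("Displacer")
         displacer.InsertUnder(obj)  # a deformer acts on the geometry of its parent object
         c4d.EventAdd()  # refresh the scene and viewport

     main()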
  2. Unfortunately, that won't solve the issue. The shape of your geometry will still change from one frame to the next, making your texture stretch and slide whatever the projection. Could you send a simplified scene? I could give it a try... I might find a workaround...
  3. If the Volume includes animated sources, its topology changes every frame, which makes it impossible to keep a consistent material projection (parametric or UV mapping). I doubt you'll find a solution...
  4. This must be it. This application is probably just slightly AI-enhanced photogrammetry, followed by automatic remeshing and, perhaps, 30 seconds of quality control at the end.
  5. As we can see in the demo video, there are different "Generation" quality levels. My bet is that "Standard" is totally AI generated, while the expensive "High" and "Ultra" involve (human) optimisation and retopology and require much longer processing. A few things potential users should be aware of: 1. It can't, for now, generate highly detailed realistic models, just very simple "cartoon" objects. A realistic horse, for example, will be dismissed. 2. It doesn't generate textures (which makes it useless for me; texturing is extremely time-consuming, even more than modelling). 3. As indicated in the manual, it requires multiple views to generate good results, ideally front, side and back, and the background should be clean and monochrome. These limitations make this service worthless for me. Paying $300 a month only to be able to generate 20 pathetic untextured low-detail models is insane.
  6. It's not a rumour, it is clearly indicated in the "How it works": "An input image is submitted and passed through our AI algorithm for reconstruction. The output mesh passes from our Quality Control pipeline to make sure that flaws are eliminated. That process aims to eliminate most of the unwanted technical characteristics of the generated models (i.e. tris, n-gons). A Quality Control team member will check and improve the output where necessary for the 3D model to match our quality standards. Yes, our algorithm is not always perfect, and we want to ensure the output quality is always up to a high standard. The output is returned to the user for further editing and the addition of textures."
  7. Most branded cars were in fact removed from Turbosquid. Many car manufacturers took legal action, including BMW: https://www.solidsmack.com/cad-design-news/bmw-group-sues-turbosquid-for-selling-3d-models-of-their-car-designs/ Turbosquid Inc. itself is responsible for violating “the Trademark Act of 1946… and New Jersey law through its marketing of 3-D virtual models of vehicles that infringe the BMW Group’s trademark, trade dress, and design patent rights.” For the few cars left on Turbosquid, you need to ask the manufacturer for permission before even being allowed to download the file. And in most cases the authorization is declined (for me it never worked, whatever the project).
  8. It works fine here. Open the attached abc file in Cinema 4D and you'll get the same results as your original animation. It might be an issue with the Blender importer... 1398921210_alembicbug.abc
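     If you'd rather test the import by script than through the File menu, here is a minimal Cinema 4D Python sketch (run from the Script Manager); the path below is a placeholder for wherever you saved the attachment:

     import c4d

     ABC_PATH = "/path/to/1398921210_alembicbug.abc"  # placeholder: point this to the downloaded file

     def main():
         # Merge the Alembic objects and materials into the active document.
         if not c4d.documents.MergeDocument(doc, ABC_PATH, c4d.SCENEFILTER_OBJECTS | c4d.SCENEFILTER_MATERIALS):
             print("Could not load the Alembic file")
         c4d.EventAdd()

     main()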
  9. The technology Clarisse "invented" more than 10 years ago is now implemented in more flexible and powerful software. Handling massive amounts of data is no longer an issue. Unreal does it perfectly, in real time (with Nanite, World Partition,...). Clarisse simply became irrelevant. There's no valid reason to use it anymore... When it was first released, we all tested it and we were all amazed by it, but nobody adopted it. It was a curiosity...
  10. Watch this presentation instead! It's made by professionals. It includes amazing technical demonstrations. It's ambitious. It's way more interesting than watching someone struggle with Cinema 4D's basic tool set...
  11. Skybox Lab

    Yes, it's an issue. I think Skybox is "pre-programmed" to generate open outdoor environments. Typing "cave", "hall", "cathedral", "dome" or "temple" always generates strange results. Other than this issue the results are stunning, exceptional! The "Scenic", "Realistic" and "Oil Painting" styles are my favourites. "Oil Painting" is always fun. Here are other tests:

    I'm extremely enthusiastic about these new AI graphic tools but I also feel a bit depressed... Yesterday I generated one superb environment after another. It took me approximately 10 minutes to get valid and interesting interpretations of my ideas. I could have created these environments in Cinema 4D and Photoshop without any premade assets, sure, but not in 10 minutes, not even in 10 hours. Some scenes might have required 100 hours of work! The final render might have been more "personal", sure. But more original? More valuable? More unique? I don't think so... and that's kind of sad...
  12. Skybox Lab

    An AI 360° environment generator: https://skybox.blockadelabs.com/ (Happypolygon mentioned it first in the Nvidia Canvas 1.2 thread). Give it a try. The results are excellent. Here are a few of my tests:
  13. I don't get it... Lightwave is no longer developed. The new core announced 15 years ago never saw the light of day. It was a failure. Since 2016 or so, the updates have been limited to bug fixes and added import/export formats. The development was even officially "suspended" in 2020. Everybody knows that Lightwave is brain dead and that improving its 20+ year old code would be a waste of time (writing new software would be cheaper and easier). The announcement indicates that this new owner is a "Lightwave UK distributor" and that it will "put together a team to save Lightwave". But who will pay this new team? I doubt the current Lightwave licence sales are enough to pay a single programmer! I absolutely don't see how this development could be funded. As someone mentioned elsewhere: "catching up with other DCCs these days is extremely hard unless you have A Team, and if you don’t have excessive amount of funding to begin with, you probably don’t have an A Team. And lastly, if you really had an A Team, I am having hard time believing such A Team would decide to start fixing up 30 year old software architecture instead of starting from scratch." https://blenderartists.org/t/lightwave-3d-brought-back-from-the-dead/1458488/12 The only explanation I can find is that this new owner wants to use the "Lightwave" brand for something else...
  14. Thanks a lot! That's way more fun than Nvidia's Canvas 🙂
  15. I'm starting to get decent results. I need to make a lot of adjustments and try many variations to avoid the "stitched look" though... One limitation really bothers me: the depth (distance) is evaluated from the bottom to the horizon line. Canvas expects us to put large structures, like mountains, in the back and flat surfaces, like grass or sand, in the front. Making a canyon or a mountain range is therefore impossible. I hope this can be improved by giving the user a way to assign a distance to the shapes (with a depth map, overlay or z-position). I also wish we had control over the illumination. It's limited to selecting a "style"; we cannot yet adjust the sun position. Here are a few tests, in Panorama mode:
  16. Asset catalogs will now be used mostly as food for AI. Adobe and Shutterstock are already transforming their huge image stocks into data sets. A marketplace selling 2D/3D assets is already a thing of the past. "Acquiring" is now replaced by "generating"...
  17. Do you mean the Panorama / Equirectangular mode? I'm working with Canvas 1.4.3. One thing I've noticed is that Panorama is more accurate than the Standard 2D mode. It also generates better atmospheres. I don't know why... Here are two tests. The maximum resolution is 4096x2048, which is unfortunately not enough for an environment map. I hope the next version goes up to 8192x4096. Sure, but I was expecting more flexibility. I thought I could do sci-fi landscapes and otherworldly environments. It seems to be less "generative" than other AI solutions...
  18. I did a few tests with it. It's of course a major achievement, but I didn't find it as powerful as I imagined. 1/ The results are good but not accurate. Canvas seems to prioritise global coherence over fidelity to the shapes and colors. In all my tests Canvas skipped details and removed information. In the image I uploaded here, for example, the clouds rarely match my drawing. It's as if Canvas were blurring the sketch before processing it. 2/ Canvas works with a predefined structure: a front view with a centered vanishing point and a centered horizon line. It's impossible to use a different camera angle; we can't, for example, use a top view. 3/ Canvas is made to create realistic images based on existing landscapes. Abstract drawings - for example a mountain on a cloud - will generate a mess. It's a shame. The whole purpose of CG is to be creative, to go "beyond reality".
  19. Adobe is in the AI game... https://www.adobe.com/sensei/generative-ai/firefly.html
  20. It can't be anything other than generative AI. Developing other features, like capsules or Bodypaint options, seems futile at this stage. Either Cinema 4D mutates into a strong AI CG tool or it will die very quickly. MAXON must act fast. Adobe is already in the game, with Firefly: https://www.adobe.com/sensei/generative-ai/firefly.html And Firefly is, in a way, a ferocious competitor to Cinema 4D. It can generate illustrations that were typically done with 3D software like Cinema 4D. The difference? The results are almost instantaneous! Concerning the Physical Renderer: we shouldn't expect upgrades. Maxon already stated that it won't be updated (development has been on hold since 2014).
  21. It's just a matter of time. What Wonder Studio can do is already a major achievement - it reaches the level of a good pre-visualisation. Just wait a few years and it'll be production ready, beyond your expectations. Swapping characters or actors will be as easy as selecting a source and defining a target. AI doesn't have to do polygon modelling, retopology, UV mapping, rigging, rendering, tracking, keying, compositing, grading, ... It uses a totally different approach; the process is more like sampling, learning and re-synthesis. The next generation of artists won't even remember what UV mapping was about! Your threads make me think you are still sceptical about AI. But the debate is over. AI will replace the software we use. It will also replace most of the people working in the CG and VFX industries.
  22. It's naive. AI graphics are beyond control. Every style has already been copied, mutated and mixed a billion times by AI. Within the AI universe the notions of "source", "originality" or "singularity" don't make sense. It's a flow of data... The example used by Glaze is quite strange. I don't see a link between the "Original Art Pieces" by Ortiz and the "Without cloak protection AI generated art pieces". One does not mimic the other; they could have been done by different artists. The AI images are original in a way! I'm also questioning the method used. It's basically a filter that makes random variations of style in every artwork. This prevents the AI from "learning" the style of an artist: the AI will consider that these images were made by different artists, so it won't be able to link them together or to answer "in the style of THIS artist" requests. But who cares? It's too late. The AI has already digested billions of images (photos, illustrations, designs, artworks,...) and understands the style of most famous painters, schools and genres. Protecting a few artworks now with a time-consuming process - one hour per image - seems totally futile.
  23. As far as I can see it was only used for (static) concept design. That's an issue with Cinema 4D: for movie production it rarely, if ever, reaches final render. Anyway, in less than 3 years nobody will use 3D software to create concept designs, mood boards or matte paintings. There are much faster (AI) solutions. The process will most likely be: make a rough image in Photoshop and let AI make the final realistic render. It's already happening, with exceptional results - unbelievably accurate.
  24. EmberGen 1.0

    The official "First" version of EmberGen is out. The most important feature is the new particle engine - perfect to create dust, sand and small debris. I'm also really enthusiast about multilayer EXR export, Raymarch Sharpening and the improved performance. https://forums.jangafx.com/t/embergen-1-0-has-been-released-release-notes/1456