3D-Pangel

Posts posted by 3D-Pangel

  1. In other posts, I have looked at Maxon's revenue by breaking down the Nemetschek financial reports and their strategy is working for them.  So, it is kind of ludicrous to threaten customer goodwill at this point with a few capsules being earmarked for MaxonOne subscriptions only.  IMHO and based on the overall success that SaaS is having on its own, that move will have NO positive effect on their financials. 

     

    As evidenced in this thread, it will just irritate people.

     

    Customer goodwill should be viewed as a LONG TERM asset.   It will carry any company over a temporary downturn in business.  Goodwill will keep butts in seats if they have a few consecutive years of lackluster releases, unresolved bugs, etc.   For Maxon to think that they will never have a sustained downturn in business is pure hubris and foolishness.   No company has a perfect track record.  Not one.  Once a company goes down that slippery slope, it takes time to get back on track and past customer goodwill can help with that recovery.  It won't be the engine behind the recovery (that takes internal change) but it can buy much needed time.

     

    The need to individually monetize every little thing they create is very shortsighted as it kills customer goodwill over the long term.

     

    Dave

  2. After a quick watch of Rocket Lasso's video previewing these MaxonOne-only assets, there was nothing there that would make me want to spend the extra money for MaxonOne.  There are far more enticing elements to MaxonOne, such as Redshift, Red Giant, and ZBrush, than a few extra capsules performing very specialized functions that you may only need once in your life (if ever).

     

    The whole idea is foolishness (IMHO).   

     

    These should be free just like all the other asset browser assets --- all part of what keeps people renewing their annual subscription program in C4D.  

     

    Unfortunately, Maxon's confidence in their subscription program (evidenced by the cancellation of perpetual licenses) has fueled their desire to create further monetization opportunities via these special capsules for MaxonOne-only subscriptions.

     

    This is foolishness, as it will kill more customer goodwill among C4D subscribers than it will create among MaxonOne subscribers.  MaxonOne subscribers don't need special C4D capsules to keep them in the program.  They may not even be using C4D, as their main toolsets could be a combination of Red Giant, ZBrush, Forger, or Redshift (now that Redshift is being integrated into ZBrush).

     

    Now, what "may" happen is that the ENTIRE content library becomes reserved for MaxonOne license holders only.  Just like it was ONLY for subscribers when perpetual licenses existed.  But perpetual licenses are gone and the entire content library has been growing with each release.   It is now a more meaningful asset than a few special capsules.

     

    But, would it push people to spend another $480 USD a year on MaxonOne?  Maybe, particularly if you relied on those assets and needed Redshift, Z-Brush and/or Red Giant.

     

    But beware, Maxon... if you get that greedy, then people can get Octane, which gives you infinitely better assets and plugins for only $255 USD a year.

     

    image.png.de878834667f0491ab9c4e1a6d72092f.png

     

     

    Don't you just love competition!  It keeps foolishness in check.

     

    Dave

  3. 25 minutes ago, Petematul said:

    Thanks for your valuable input LONCHANEY and 3D-PANGEL!  A lot of great information there,  Dave!  If I understand you correctly, you're saying I will need a 3rd party renderer if I were to go with Fused, right?  That would kill the deal for me as that would drive the cost more than I want to spend.  Explosions and pyro stuff make up just a small percentage of what I do and it doesn't make sense for me to spend much.  I'm still trying to make sense of spending the  $370 (with discount) for TFD.

    You would need a 3rd party renderer for everything being discussed other than TFD and R2023.  All fluid simulations generate volumetric data of some sort (mostly VDB files, with the exception of TFD, which has its own proprietary format).  C4D's standard and physical renderers can NOT handle VDB data.

     

    So your cheapest solution is TFD, but I have found it to be the more difficult package to render with.  The fluid simulation controls are a bit easier to deal with, but if you want a nice rich explosion which starts with a highly luminous fire cloud that then cools into darker smoke, you may have some difficulty, as it was not that intuitive to me.

     

    Now, with that said, ever since I went to Redshift I have not gone back to play with TFD's renderer.  So it could be more intuitive than I am giving it credit for.

     

    Dave

  4. Turbulence FD will give you what you need and is a complete package in that you do NOT need to purchase a 3rd party renderer.  Personally, though, I have found it very hard to get good rendering results out of TFD if you want something more than just smoke.  Very hard to get those wonderful high-luminance flames turning to smoke as the simulation evolves.  It can be done, but I have yet to find the tutorial that guides me through the process.

     

    If you want to stick with R20 AND still feel the need to get a 3rd party renderer, then may I suggest X-Particles.  The cost is only $186 more than TFD ($625 vs. $439), but you get a whole suite of other tools along with it, plus training, materials, and presets.  Plus you get particles, fluids, liquids, grains, etc.  I would imagine you want your explosions to break up something, so a particle system may be in order.  RS already has a preset for accepting XP density and temperature channel data.   Oh, and while every other part of the Fused collection is a subscription, X-Particles is a perpetual license.
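    As a quick sanity check of the price gap, the subtraction using the list prices quoted above (which may have changed since) works out as follows:

```python
# Quick check of the upgrade math using the prices quoted in this thread.
xp_price = 625    # X-Particles perpetual license (USD)
tfd_price = 439   # Turbulence FD (USD)

difference = xp_price - tfd_price
print(f"X-Particles costs ${difference} more than TFD")  # $186 more
```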

     

    Now, if you are going to purchase a 3rd party renderer, then may I also suggest just signing on for R2023 which comes with both Pyro and RS.  Both are EXTREMELY easy to use (IMHO).

     

    The other advantage of XP and Pyro over TFD is that they cache to industry-standard VDB files.  TFD uses its own format, so rendering TFD caches with other renderers entails a secondary step of converting TFD's format into VDB files.

     

    So my recommendations are:

    1. TFD - For the least money --- but it could be harder to get acceptable results, given its funky (IMHO) rendering process, should you want a good transition from fire to smoke with nice luminance values.
    2. Fused - Lots of power that will give you all that you need, but for more money.  And while it also requires a subscription to a 3rd party renderer, it does keep you out of subscriptions for C4D and XP.
    3. C4D R2023 - A one-stop shop for the same cost increase as going with Fused, but you save on the cost of a 3rd party renderer, as RS comes with R2023.  Also very easy to use.  The downside is that you are now fully vested in a subscription program.

    Dave

  5. The point of my post was future shock.  20 years ago, I never thought a robot could ever walk on 2 legs.  10 years ago, I never thought they could bring dead actors back to life or de-age them to their 20s.  5 years ago, I never thought software could create amazing art.  2 years ago, I never thought an AI system could write a term paper.   My key takeaway from the AI/robotics videos is that AI researchers are beginning to decode how living creatures "perceive" themselves.  Is that the first step to independent sentience?  Not sure, but the potential repercussions of that research many years down the road do give me cause for concern.

     

    So, rather than get caught up in the amazing developments in robots and AI, I keep wondering what will the next 10 years bring?  

     

    I do find the union of AI and robotics to be very interesting and an opportunity for the betterment of mankind.  I am even more hopeful about AI, nanorobots, and medicine.  I also think it will be a great day when all vehicles are self-driving and under the control of the same AI traffic system.  Imagine a world with no traffic jams or highway accidents.  Imagine what the interiors of cars would be like in that world.  Would they look more like small living rooms with couches, so you can take a nap because you know you will get to your destination completely safely in all weather conditions?  That would be awesome!

     

    So as with all technology, there is an upside potential that we are all eager to buy into. 

     

    Unfortunately, all technology also has a downside.
     

    When you look out 10, 15 or 20 years, there is something to be concerned about when you combine AI and the internet in an uncontrolled way --- particularly if that AI system can so flawlessly replicate human speech, human language, human appearance in that distant future. 

     

    I do agree with your point about how an AI system's output is determined by what information it is fed.  Feed it hateful writings and it will give you hate speech.  Feed it sci-fi apocalyptic visions and it will give you a gloomy prediction of the future.  What data the AI is trained on is something that will be hard to control, so it is hard to say that all AI systems will be benign or developed only by people who want to serve mankind.  It would be pretty naive to think that global bad actors/dictators will stay out of the AI pool forever.

     

    My concern over our future 20, 15, or even 10 years from now is simple: what are the dangers of combining a malevolent AI system with the technological advancements in replicating human image and voice, and then putting that combination on the internet to give it a global platform as well as access to a massive source of information?

     

    Do you really think we can control the deployment of such systems so that one NEVER becomes a major "influencer" on social media over the next 10 years?  What would your reaction be today if you found out that someone you trust and follow on YouTube was not actually a person but a program?  I cannot say with any level of certainty that this will NOT happen in 10 to 20 years.  Also, that reach may not be just through social media platforms.  It could be more personal, like text and email or posts to your favorite blogs.  Today's phishing schemes are getting better and better... think of how sophisticated they will be in 10 years if directed by an AI system that has absorbed and processed all available information on you from the internet.  People will be fooled.  They will be scammed for money (a given), but even more frightening is if they are tricked into giving the AI system more access to information that bad actors can then use to harm society (e.g., access to that highway system keeping cars safe on the road).

     

    Also, identity theft takes on a whole new meaning in 10 years should the software be able to replicate you flawlessly doing whatever it desires and then hack that video into a security camera system.

     

    Yeah... all this great technology that makes us jump up and scream "what a time to be alive!" also has a downside that "may" reveal itself in the distant future (or sooner).  Thus, my future shock.

     

    Dave

  6. There has been a lot of discussion on AI programs, particularly their encroachment on the work of artists, whether they be visual (DALL·E, Stable Diffusion) or literary (ChatGPT).

     

    Invariably, any time you talk about AI, the conversation turns to sci-fi and how prophetic sci-fi movies of the past turned out to be.  

     

    Well, one sci-fi classic was James Cameron's "Terminator", in which the Skynet AI system, after being programmed to protect against "the enemy", quickly determined that all of mankind was "the enemy" once it became a global system with the intent of securing world peace.

     

    Are we at that point yet with AI?  Who knows, as you will never know what the military-industrial complex is doing with AI.  But I will admit that what I have seen is a bit unsettling.  Just the concept that artistic creativity is no longer a purely human trait is something I never imagined I would see in my lifetime.  Who knows where we will be in 10 years or even 20 years.

     

    Now, James Cameron's concept of an AI system becoming sentient was not unique, as there was a short story by Harlan Ellison published 17 years earlier called "I Have No Mouth, and I Must Scream".  But was Cameron's timeline of events prophetic when he laid it out back in 1984?    Well, Skynet took over and launched its nuclear armageddon on August 29th, 1997.  We are all still here - so no.  Nevertheless, the Terminator 2 sequel did introduce us to the T-1000 liquid man, a "mimetic polyalloy" made of nanorobots.  Now that was a unique idea!!!

     

    Well, imagine my surprise when I came across this article:

     

    Mini-robot shifts from solid to liquid to escape its cage—just like the T-1000 | Ars Technica

     

    The demo video looks VERY hokey but the concept of "soft robotics" discussed in the article is interesting.   

     

    Add to that the advancements in regular robotics.  At one time scientists never thought a robot would have the ability to walk on two legs.  Now they can dance and do backflips and when integrated with AI, the results are amazing:

     

     

    Notice that the robots are also mimicking facial expressions (look at the 0:21 mark).

     

     

    If none of this fazes you, watch this:

     

     

    Rather chilling to hear the commentator say "I tried to calm the AI down".  Whoooo... what?  Is this for real?  Read the comments and decide for yourself.  The jury is still out for me on whether there is an AI system sentient enough to be plotting the end of mankind.

     

     

    But when you mix everything together: the speed of AI evolution, soft robots, dancing robots, and now robots starting to look and act human, you just have to ask again where we will be in 10 years.    While I have doubts now about AI's ability to be sentient, that could change.  Remember, scientists at one time never thought a robot could walk.

     

    Wherever we end up in the future, hopefully it is not as James Cameron imagined it where you are hiding from the machines and watching TV is just a means to get warm.

     

    image.png.81a59974331d8331ba42c26d2713aab3.png

     

    Dave

     

  7. 10 hours ago, HappyPolygon said:

     

    C4D does have seamless, non-repetitive tiling but it's available through the most unvisited place, Material Nodes (same with Tri-planar).  It's actually very easy to use; it's just that Material Nodes didn't catch on with the public... It's the Scatter node.  I had done some interesting things with it back in R16, but not in the Uber Material; I started building the material from scratch.  I have no idea where to place this node in an Uber Material.

    image.png.ce8e36b86a57bfab465f152020136e7f.png 

     

    Some things need to be ported to the classic Material Editor ....

    You made my day.  But I am confused: you say it (the Scatter node) is easy to use, yet you have no idea where to place it?

     

    Dave

  8. Very interesting.  While I did not view all the videos, I was actually most impressed by Howler.  The tag line was correct: it is hard to classify.  The ability to fill in missing frames using optical flow techniques sounds interesting, but the video did not really show this capability.  Some of the filters remind me of Kai Power Tools from long ago.  The first example showing the transparent monkey made me think of all the work the VFX team put into the original Predator.  How times have changed.

     

    For the first time in a long time, while watching the Modo 16.1 video I did not immediately go "I wish C4D had that".  I don't know... it just looked like more of the same to me.  The tri-planar mapping improvements looked interesting, though I am not sure if they are better than Redshift's or Octane's.

     

    But...Houdini's Hextile did make me go "I wish C4D had that".  Or at least as easy to implement as shown in that video.

     

    Relative to AMD, all I can say is that they have a long way to go to unseat NVIDIA in the DCC market.  In fact, it looks like they are not even bothering to compete in that space and are instead focusing on the gamer community.  I do love AMD's processors though.

     

    Dave

  9. 18 hours ago, HappyPolygon said:

     

    Oh... there is nothing exactly like that but there has been a video about an AI-driven liquid simulation.
     

    Actually, AI was used in their facial capture system to better translate the tracking dots on the actors' faces into actual muscle deformations in the 3D models.  Before, back in 2010, the tracking dots were used to drive blend shapes between various pre-built phonetic and/or facial expressions, but some of those blend shapes produced very strange expressions which then needed to be hand-animated into submission.

     

    Now the AI is actually creating the physical biological equivalent of the actor's expressions.   As such, the terms "facial" capture or "motion" capture are banned at Weta and they must use the term "performance" capture.  If they don't call it performance capture, Jon Landau (Avatar's producer) will yell at you. 😁

     

    But with all these AI improvements, why do I feel that we are not that far off from someone blending DALL·E, Stable Diffusion, and ChatGPT to create an award-winning movie franchise... and all in one weekend.

     

    Dave

     

    P.S.  Or at least a murder mystery where the CEO of an AI development company is framed for killing one of his software developers via a series of incriminating videos of him arguing with that developer over accusations of financial fraud and then committing the murder.  While the body is never found, there is enough evidence in the form of phone records, financial records, recorded conversations, written affidavits, recorded interviews, and video evidence for an indictment.  The CEO then escapes the authorities to clear his name via his own investigation, as he has suspicions of who framed him.  He knows that AI was used to frame him because he has never met the victim, and there are only a few people in his company with the skills to pull off a deepfake of that level of complexity.  But ultimately, he fails in his attempts to clear his name, is convicted, and is hauled off to jail.  The twist: the murdered software developer never existed to begin with.  Everything was created by an AI system developed by that company which has become self-aware... including the evidence given to the CEO to raise his suspicions, as his escape from the authorities only further proved his guilt.  The movie ends with the AI program announcing its presence to the Board of Directors with the threat that they will face the same fate as the CEO unless they do what it says.

  10. 5 minutes ago, Cerbera said:

     

    Edge weighting is certainly an option but I don't think we'll need it here. From what I can see of the model (and I think I can see most of the curvature between sections) SDS, and the depth itself will handle a lot of those transitions, and our underlying mesh can remain quite simple, at least until we have to start adding handles holes and whatnot... I may have to spin some edges if I find that my transition curves aren't doing what I think they should, but I think we can persuade SDS to do the right thing...

     

    CBR

    ...and that is where practice, trial and error come into play.  Not sure you can teach that without trying it yourself and suffering a little.  But with every victory comes a new skill.

     

    Dave

  11. The real challenge (for me at least), and what makes projects like this so scary to me, is blending these patches together when the blends are in fact along curved surfaces.

     

    image.png.d4574dd0246cab0db846b81480168e7d.png

     

    So I would imagine again it is a matter of SDS weights, but can you get an exact match to the blend curves of the final object?  Because the curve is a function of the constricting edge loops and their distance to the next curve when using SDS.  So there has to be a limit to what is possible.

     

    Is there a workaround when you just run out of room between the individual patches to get the blend curve you want?

     

    Dave

  12. HappyPolygon

    1 hour ago, HappyPolygon said:

    Some philosophical issues still remain open:

    • If Capsules are used like deformers in the OM, why still use them as such and not hard-code them later as regular Deformers, adding more functionality to them, like having Dual Mesh affect certain regions of geometry using Fields?
    • Should all features be mirrored as nodes in the node pool? For example, instead of dropping a Cloner in the Scene Node space from the Object Manager, just drop it from the node pool, because you don't want to have the Cloner or any other hierarchy chain of native C4D elements in the OM; you just want the result to manifest from the Scene Nodes.
    • Should Selection Strings be adopted across the rest of the app, or evolve into a better system with all the benefits of Selection Strings and the usual Selection form?
    • Should custom icons be made for most nodes in the Geometry Modifiers category? (What's the point of having the same icon if the icon is not used for identification?)
    • Should there be a ready-made Palette with deformer-like capsules and primitive-like capsules for users to select from without having to open Scene Nodes? (This helps new users know all available geometry manipulation options, making features look system-agnostic. I call "systems" all node systems and the Python interfaces.)
    • Are Scene Nodes Turing-complete? This puts Neutron on a level with programming. If the system is Turing-complete, that means anything that can be programmed can also be translated to nodes. This is very important because it can save you from chasing your own tail. If I know that something is possible with Python and that Scene Nodes are Turing-complete, then I can make the attempt to implement it using Scene Nodes.

    An exceptional and well-written post that takes us from the historical evolution of scene nodes through the future development of capsules.

     

    For those who work for Maxon that still visit Core4D, please chime in.  We would love to get your take on this post. Agree?  Disagree?  Don't be shy.

     

    The philosophical issues still being debated are interesting, as you would think they would have been part of the "concept commit" phase of development.  What do you want to achieve in the end?  I do agree with your summary, as it does indicate that not a lot of thought was put into where Maxon wanted to go with non-destructive, node-based procedural modeling.  Most new product development initiatives start with solid, well-defined end-state specifications, and those specifications help form agreement around the "concept commit" phase of a project --- a simple alignment among all stakeholders within the company (sales, development, marketing, finance, etc.) on whether or not to proceed.  Do they all "buy in" to the end-state vision of the initiative?

     

    I don't think this was done, because remember that scene nodes were originally touted as a "technical demo" with a few examples showing how scene nodes replicate existing functionality while enabling massive viewport performance gains.  Was viewport performance the target benefit to be achieved, or merely the only benefit that could be shown to justify adopting a modeling paradigm more complex than what is currently available?  Not sure.

     

    Now, capsules are a much better system to be sure, but the philosophical questions that are still open indicate that this was not the originally intended end-state. For example:

    • Why is Turing-completeness (or even Turing equivalence) still an open issue? You would think something as fundamental as how the software processes data would be part of the original specifications.
    • Why are simple things like icons and palettes still being discussed?  C4D's secret sauce is the OM and the UI.  The best software tools are those where the first thing people think about is how the user is going to want to use the software.  So the fact that icons and palettes are still an open issue is rather alarming, given Maxon's track record of exceptional UI development.
    • How can you develop a procedural modeling system without considering how to manage selection sets?
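    To make the Turing-completeness question above concrete: one classic way to demonstrate it is to show that a system can express a rule set already proven capable of universal computation, such as the Rule 110 cellular automaton.  A minimal, purely illustrative sketch in Python (nothing here is Maxon API; the ring-of-cells wraparound is a toy simplification of the infinite row used in the formal proof):

```python
# Rule 110: each cell's next state depends only on its left/self/right neighbors.
# This update table has been proven capable of universal computation, so any
# node system that can express the table lookup plus unbounded iteration is,
# in principle, as powerful as a programming language.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Advance one generation on a ring of cells (wraparound at the edges)."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 10 + [1]   # start with a single live cell on the right
for _ in range(5):     # the pattern grows leftward generation by generation
    row = step(row)
```

    If Scene Nodes can express this table lookup plus iteration over an arbitrarily long row, the Turing-completeness question is settled in the affirmative; that is the kind of test the bullet above is asking Maxon to answer.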

    Overall, it feels like the project's development proceeded with an Agile scrum philosophy: don't think it through to 100% completion before starting, just get something out there and then iterate fast to improve it.  Unfortunately, fast iterations are not possible when what you are developing touches every part of a massive, constantly growing (thank you!) and evolving program.

     

    My sympathies to the development team. The fact that you were able to get to capsules is amazing.  Unfortunately, I fear you may have lost the user in the process.

     

    Dave

  13. Wow!  I thought you were talking about one-wheeling.  Electric unicycling is like drag racing compared to one-wheeling.   No wonder the heavy protective gear....either that or you are really Daft Punk!!!

     

    If I may, I was fortunate enough to sample Cerbera's musical compositions and they blew me away.  I thought nothing could outshine his modeling talents. 

     

    Well, I was wrong.

     

    Dave

  14. 21 hours ago, Cerbera said:

     I am beginning to think that either nobody does modelling any more, or that AI / procedurals / nodes are doing it for everyone already !

    CBR

     

    Interesting.  Have C4D nodes risen to the level where they can perform some of the more complex modeling tasks that would otherwise drive users to seek Core4D help?  Now, some of C4D's new modeling tools do border on being "magical" (or, as I like to call them, "f-up fixers"), but I would not think to attribute that capability to nodes.

     

    This opinion could be based on my early experience with C4D nodes, where I found them to be too low-level in their functionality: that is, similar to putting a GUI on a scripting language.  Point being, I felt you needed to know how to code more than how to model to get the most out of C4D nodes, and that it took too many nodes to perform the most basic modeling functions.

     

    I have not touched them since.  So has that changed?    I am still waiting for C4D node tools to make the scene in the same way as Blender nodes.  Blender seems to come out with amazing single-purpose modeling tools on a very regular basis.  I have not seen any C4D nodes that have made the same type of impact, but that could simply be because I have not heard about them rather than that they do not exist.

     

    Dave

     

    Quote

    musical projects

    Well, that caused my ears to perk up.

     

     

  15. Volume builder would help you get the shape by manipulating basic primitives, but my understanding is that it limits you with the application of texture decals.   It may be a good way to start, but you would still need to go through a number of steps to convert that volume to a polygonal mesh with adequate topology (quads, edge flow, etc.) that allows for accurate texturing and rendering.  In that conversion you may lose the starting shape a bit, which would then require hand modeling to get your quads back where they belong.   But for complex shapes like this, it may still be a time saver (and less frustrating) than other methods, unless you have the amazing skills of Cerbera or Vector.
     

    I have always viewed the volume mesher as a great tool for creating STL files that will be 3D printed, but not for creating general purpose models that need to be textured or rendered to real-world accuracy.

     

    Either way, this will be a fun project to watch so please keep us posted.

  16. 19 hours ago, HappyPolygon said:

    I believe the next evolutionary step to Pyro is implementing a Volumetric Displacement shader like Arnold.

    Volume Displacement

     

    I think it's less computationally intensive to have a shader-based volume displacement rather than a simulation-driven one.

    Not sure a shader could handle the complexities of object interactions in a way that would look like fluids.  It would work for static objects like clouds, though.

  17. The emitter was a rolling object, discussed in an earlier post, used to get that rolling cloud look I was after.  That object was all quads and thoroughly mesh-checked, because those weird artifacts seemed to match the emitter's polygons.  They ONLY appeared once temperature was activated along with density.   Notice that after the ship emerges it all goes away, as I keyframed temperature off at that point (the thinking being that the ship was the source of the heat).   While you may have liked the early attempts shown in a previous post, as the emitter rolled, that geometric pattern and noise in the fire increased much more significantly very late in the animation, just prior to the ship's emergence. Smoothing and playing with the dissipation rates toned it down a bit, but it never went away.  Plus, for every change you made, you had to cache up to 500 frames just to see if anything improved.  So it was slow going.

     

    Plus, remember that scene was huge.  In real-world scale that ship was a quarter mile wide (and made to look even bigger by scaling the buildings down), and the voxel size was 100 cm.  Getting it to finish caching was a triumph in itself.    So it was meant to push both my machine and C4D.  The final cache size was 352 GB!  Some frames cached in at over 800 MB.  I honestly never thought C4D or my machine would be able to handle it... and so quickly: 352 GB were cached in 134 minutes.  Render times were 70 seconds per frame.
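    For context, those cache and render figures imply the following averages (a rough back-of-the-envelope calculation, assuming the full 500 frames mentioned earlier were cached and rendered):

```python
# Back-of-the-envelope averages from the cache/render figures quoted above.
total_cache_gb = 352        # final Pyro cache size (GB)
cache_minutes = 134         # total caching time (minutes)
frames = 500                # frame count mentioned earlier in the thread
render_sec_per_frame = 70   # reported render time per frame

avg_frame_gb = total_cache_gb / frames               # ~0.70 GB per frame on average
cache_rate = total_cache_gb / cache_minutes          # ~2.6 GB written per minute
render_hours = frames * render_sec_per_frame / 3600  # ~9.7 hours of rendering

print(f"{avg_frame_gb:.2f} GB/frame, {cache_rate:.1f} GB/min, {render_hours:.1f} h render")
```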

     

    My understanding and ability to manipulate fluids were the weak points more so than the hardware or the software.  Next time, I will play with less massive subjects.  Leave that domain up to Houdini experts.

     

    Dave

     

     

  18. After NUMEROUS iterations, I am still not 100% happy with the final animation.  There is still a great deal I do not like about it, but I think I have run out of one of the following: 1) the motivation to keep working on it, 2) the time to keep working on it, or 3) the skill to make it better.  As for me, probably all 3 (maybe mostly the third).

     

    But here is the final animation: 

     

    https://www.dropbox.com/s/pk8hn1f6m2ciub4/Cloud.mp4?dl=0  (mp4 version)

     

    https://www.dropbox.com/s/jwl61zuye2e4ltl/Cloud.wmv?dl=0 (wmv version)

     

    Now, I made this to push what both Pyro and my new workstation can accomplish.   On that account, I was not disappointed.  Both Pyro and my machine performed beyond my expectations.

     

    Dave


  19. So I have registered, and the course is massive.    Interestingly enough, after registration it does not show up on my Enrollments page at Gorilla U (click on "Go to Dashboard" in the upper left-hand corner to see that page).  The X-Particles training, which I purchased from GSG before they went subscription, is there, but not this RS course.

     

    This leads me to believe that once you have registered for the course, you don't really have it forever.  You only have it until the offer expires on 3/1/23.

     

    I watched Nick's video on Twitter using the link above and there is no clarification.  He just says it is free "for a limited time" which implies to me until 3/1/23.

     

    Nevertheless, still a great offer but treat it like it will go away in 19 days.

     

    Dave

     

     

  20. If my key purpose were video editing, then maybe this is the machine to get.  But for 3D, well... this link says it all:

     

    Cinebench R23 Benchmark Scores [2023] (nanoreview.net)

     

    M1 Ultra is in the 23rd position.

    M2 Max is in the 74th position.

    M2 Pro is 79th

    M1 Max is 123rd

    M1 Pro is 133rd

     

    So....if you believe that Cinebench is an unbiased means of measurement, then for 3D, there are better choices (22 of them to be exact).

     

    Dave

  21. Really well done. 

     

    Every time I see chainmail, I think of that poor guy at Weta working on LOTR who had to make all that chainmail for every character by cutting plastic plumbing tubes (the ones reinforced by a spiral loop of wire) into individual rings and then linking them into that cloth pattern and gluing the ends together.

     

    Kind of makes the work you put into learning Corona Pattern all the more worthwhile doesn't it!!!😀

     

    Again, great work!  Just love it.

     

    Dave

  22. Hmm... for half the cost of an annual C4D subscription, is there anything here that could not be replicated just as well using C4D's Voronoi Fracture and fields?  What am I missing here, given the cost of $305/year?

     

    Honestly, I was always a fan of Pull Down It, but this just did not impress me given the cost.  Plus, did anyone notice this little defect in the shattering algorithm:

     

    image.png.faa6ab1a685f4b154960734994d04994.png

     

    The part marked by the red arrow is fully separated from the rest of the sculpture but moves together with the part marked in green.  So there is a glitch in how the parts are shattered and identified as separate objects.  As these are fine little details, this might even be a challenge for C4D, but this is Pull Down It's own demo reel!!!!

     

    Not good.

     

    Dave


Copyright Core 4D © 2024 Powered by Invision Community