Everything posted by 3D-Pangel

  1. Sooo....for those who work with MoGraph....if you look closely, this is not just a simple matter of spline growth along a path that can be controlled with fields. There is bubbling going on at the very beginning, and I am not sure that fields can do that. Thoughts? Dave
  2. Interesting comment from a person who has mastered Houdini. Maybe we are more afraid of Houdini than we need to be. But I will agree that Blender tutorials can be a bit hard to follow. I think it has to do with the UI. While 2.8+ is far more usable, there are just too many levels of menus, which results in an overdependence on hot-keys. Now, hot-keys are the way to go for speed and efficiency, but when a tutorial becomes nothing BUT hot-keys (which they all do after the short introductory overview), it is hard to follow. Invariably the instructor moves so fast that they forget to mention the hot-key and just focus on the function, which then prompts the student to stop the video, look up the actual command that corresponds to that function, replicate that step on their own computer, and then hit play again. Do that 30+ times in any one tutorial and fatigue ultimately sets in. In that respect, I can see why procedural programs like Houdini (and to some extent C4D) are easier to follow due to their more non-destructive behavior. Every step is up there on the screen. +1 for that!!! That book is incredible. Just started to flip through it. At 226 pages, there really is a ton of content!!!! Dave
  3. For those who want to dive a little deeper into the logic of shortest-path algorithms, a good explanation of Dijkstra's algorithm, considered one of the better ones (and there are many), can be found here. If you want to see an example of that algorithm in Python code, then look here for some explanation, or select the Python tab at the bottom of the first link I provided. I looked this up because I wanted to know whether the shortest path was calculated on a node-by-node basis (that is, at each node it only selects the nearest neighbor) or whether there was some logic that looked at the overall path length, because the actual shortest overall path may entail taking the longest path between two individual nodes at some points (see the sketch below). Pretty cool stuff. A neat application would be creating industrial piping. A secondary step would be to then add fillets to every spline. Dave
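     For the curious, here is a minimal sketch of Dijkstra's algorithm in plain Python (standard library only). The graph, node names, and edge costs are invented for illustration; the point is that the priority queue ranks candidates by total accumulated path cost, not by nearest neighbor, which answers the node-by-node question above.

         import heapq

         def dijkstra(graph, start, goal):
             """Return (total_cost, path) for the cheapest route from start to goal.

             `graph` maps each node to a dict of {neighbor: edge_cost}.
             """
             # Priority queue of (cost so far, node, path taken); heapq always
             # pops the entry with the lowest accumulated cost first, so the
             # first time we pop the goal we have the globally shortest path.
             queue = [(0, start, [start])]
             visited = set()
             while queue:
                 cost, node, path = heapq.heappop(queue)
                 if node == goal:
                     return cost, path
                 if node in visited:
                     continue
                 visited.add(node)
                 for neighbor, edge_cost in graph[node].items():
                     if neighbor not in visited:
                         heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
             return float("inf"), []  # goal unreachable

         # Hypothetical graph where greedy nearest-neighbor fails: from A the
         # cheap first hop to B (1) dead-ends into an expensive edge to D (10),
         # while the pricier first hop to C (3) yields the true shortest path.
         graph = {
             "A": {"B": 1, "C": 3},
             "B": {"A": 1, "D": 10},
             "C": {"A": 3, "D": 3},
             "D": {"B": 10, "C": 3},
         }
         print(dijkstra(graph, "A", "D"))  # -> (6, ['A', 'C', 'D'])

     A greedy node-by-node walk from A would grab the cheap edge to B and pay 1 + 10 = 11; Dijkstra's algorithm finds the cheaper overall route (3 + 3 = 6) even though its first hop is the more expensive one.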
  4. Igor, You have some incredible skill! Honestly. In any other program, your work would be impressive but the fact that you have mastered Houdini to produce at this level of quality should open a few doors for you simply because you ARE A RARITY!!!!! Bravo! Dave
  5. Honestly, I think that when any menu can be conjured up at a random location in your viewport, you really need some haptic feedback in your mouse (e.g., a slight vibration) to tell you that you are over an icon. Otherwise, all eyes are on the menu to read in a circle, which (for me at least) tends to slow me down, as I am faster at reading left to right or scanning top to bottom on a standard menu than reading in a circular pattern. Plus, constantly moving my eyes around in a circle can cause eyestrain. I tried C4D's pie menus some time ago and, for the reasons stated above, stopped using them and just put what I needed into my standard layout. It had nothing to do with how the pie menus looked in C4D (don't they all look somewhat the same?) but rather with how I interacted with them. But haptic feedback would help you build the muscle memory the Maya guys were talking about all those years ago when they said that using pie menus is like driving a stick shift. If they had that, then that would be something really "new" that would prompt me to both try them again and get a new mouse. Just my 2 cents. Dave
  6. Aren't pie menus rather old by now? They have been around for a while in C4D, and I remember talks as far back as 2005 discussing pie menus in Maya and how using them built "muscle memory" similar to what you have when driving a stick shift. Is there something new in how pie menus are being implemented today? Dave
  7. I can understand why everyone talks about Blender (it is free). I can understand why everyone talks about Houdini (it is powerful). And I can understand why everyone talks about C4D "on this forum" (because that is where we all came from originally). So of those three, it is understandable that Blender and Houdini would also be talked about on other forums as well. But is anyone talking about modo anymore? Honestly. Do a quick YouTube search on C4D, and you will find tutorials posted today and/or this week. Do the same for modo, and the most recent one I could find was from 10 months ago. So what is the "word on the street" on modo? Is it worth learning? Did it overcome its stability issues? Is it growing? In short, in the discussion of what programs to learn "after" C4D, why does no one mention modo?
  8. Personally, I just don't like the "idea" of once again creating different service classes within a single program, and those special capsules are just that: you can't get this feature in C4D unless you upgrade to the "higher" service class offered by MaxonOne. Doesn't that fly in the face of all the noise we heard about the legal and software difficulties of offering an indie license? The license management was too difficult! Managing the different capabilities of a full and an indie license was too difficult. Isn't that what they told us? Their arguments made sense because they got rid of Prime, Broadcast, etc., citing those same reasons. But for some reason they have gone back to "technically" offering two different sets of C4D capability under two different license models. I guess you can solve those software and licensing problems when creating a "higher" service class, as that leads to a higher revenue opportunity, but they remain unsolvable for the lower service class that people are hoping for with an indie license.
  9. In other posts, I have looked at Maxon's revenue by breaking down the Nemetschek financial reports, and their strategy is working for them. So it is kind of ludicrous to threaten customer goodwill at this point with a few capsules being earmarked for MaxonOne subscriptions only. IMHO, and based on the overall success that SaaS is having on its own, that move will have NO positive effect on their financials. As evidenced in this thread, it will just irritate people. Customer goodwill should be viewed as a LONG-TERM asset. It will carry any company over a temporary downturn in business. Goodwill will keep butts in seats if they have a few consecutive years of lackluster releases, unresolved bugs, etc. For Maxon to think that they will never have a sustained downturn in business is pure hubris and foolishness. No company has a perfect track record. Not one. Once a company goes down that slippery slope, it takes time to get back on track, and past customer goodwill can help with that recovery. It won't be the engine behind the recovery (that takes internal change), but it can buy much-needed time. The need to individually monetize every little thing they create is very shortsighted, as it kills customer goodwill over the long term. Dave
  10. After a quick watch of Rocket Lasso's video previewing these MaxonOne-only assets, there was nothing there that would make me want to spend the extra money for MaxonOne. There are far more enticing elements to MaxonOne, such as Redshift, Red Giant, and Z-Brush, than a few extra capsules doing very specialized functions that you may only need once in your life (if ever). The whole idea is foolishness (IMHO). These should be free just like all the other asset browser assets --- all part of what keeps people renewing their annual C4D subscription. Unfortunately, Maxon's confidence in their subscription program (evidenced by the cancellation of perpetual licenses) has fueled their desire to create further monetization opportunities via these special capsules for MaxonOne-only subscriptions. This is foolishness, as it will kill more customer goodwill among C4D subscribers than it will create for MaxonOne subscribers. MaxonOne subscribers don't need special C4D capsules to keep them in the program. They may not even be using C4D, as their main toolset could be a combination of Red Giant, Z-Brush, Forger, or Redshift (now that Redshift is being integrated into Z-Brush). Now, what "may" happen is that the ENTIRE content library becomes reserved for MaxonOne license holders only, just like it was ONLY for subscribers when perpetual licenses existed. But perpetual licenses are gone, and the entire content library has been growing with each release. It is now a more meaningful asset than a few special capsules. But would it push people to spend another $480 USD a year on MaxonOne? Maybe, particularly if you relied on those assets and needed Redshift, Z-Brush and/or Red Giant. But beware, Maxon.....if you get that greedy, then people can get Octane, which gives you infinitely better assets and plugins for only $255 USD a year. Don't you just love competition! It keeps foolishness in check. Dave
  11. You would need a 3rd party renderer for everything being discussed other than TFD and R2023. All fluid simulations generate volumetric data of some sort (mostly VDB files, with the exception of TFD, which has its own proprietary format). C4D's standard and physical renderers can NOT handle VDB data. So your cheapest solution is TFD, but I have found it to be the more difficult package to render with. The fluid simulation controls are a bit easier to deal with, but if you want a nice rich explosion which starts with a highly luminous fire cloud that then cools into darker smoke, you may have some difficulty, as it was not that intuitive to me. Now, with that said, ever since I went to Redshift I have not gone back to play with TFD's renderer. So it could be more intuitive than I am giving it credit for. Dave
  12. Turbulence FD will give you what you need and is a complete package in that you do NOT need to purchase a 3rd party renderer. Personally, though, I have found it very hard to get good rendering results out of TFD if you want something more than just smoke. It is very hard to get those wonderful high-luminance flames turning to smoke as the simulation evolves. It can be done, but I have yet to find the tutorial that guides me through the process. If you want to stick with R20 AND still feel the need to get a 3rd party renderer, then may I suggest X-Particles. The cost is only $186 more than TFD ($625 vs. $439), but you get a whole suite of other tools along with it, plus training, materials, and presets. Plus you get particles, fluids, liquids, grains, etc. I would imagine you want your explosions to break up something, so a particle system may be in order. RS already has a preset for accepting XP density and temperature channel data. Oh, and while every other part of the Fused collection is a subscription, X-Particles is a perpetual license. Now, if you are going to purchase a 3rd party renderer, then may I also suggest just signing on for R2023, which comes with both Pyro and RS. Both are EXTREMELY easy to use (IMHO). The other advantage of XP and Pyro over TFD is that they deal with industry-standard VDB files for caching. TFD uses its own standard, and rendering TFD with other renderers entails a secondary step of converting TFD's format into VDB files. So my recommendations are:
     • TFD - The least money, but it could be harder to get acceptable results given its funky (IMHO) rendering process, should you want a good transition from fire to smoke with all the nice luminance values.
     • Fused - Lots of power that will give you all that you need, but for more money. While it also requires a subscription to a 3rd party renderer, it does keep you out of subscriptions for C4D and XP.
     • C4D R2023 - A one-stop shop for the same cost increase as going with Fused, but you save on the cost of a 3rd party renderer as RS comes with R2023. Also very easy to use. The downside is that you are now fully vested in a subscription program.
     Dave
  13. The point of my post was future shock. 20 years ago, I never thought a robot could ever walk on 2 legs. 10 years ago, I never thought they could bring dead actors back to life or de-age them to their 20's. 5 years ago, I never thought software could create amazing art. 2 years ago, I never thought an AI system could write a term paper. My key takeaway from the AI/robotics videos is that AI researchers are beginning to decode how living creatures "perceive" themselves. Is that the first step to independent sentience? Not sure, but the potential repercussions of that research many years down the road do give me cause for concern. So, rather than get caught up in the amazing developments in robots and AI, I keep wondering what the next 10 years will bring. I do find the union of AI and robotics to be very interesting and an opportunity for the betterment of mankind. I am even more hopeful about AI, nanorobots, and medicine. I also think it will be a great day when all vehicles are self-driving and under the control of the same AI traffic system. Imagine a world with no traffic jams and no highway accidents. Imagine what the interiors of cars would be like in that world. Would they look more like small living rooms with couches, so you can take a nap because you know you will get to your destination completely safely in all weather conditions? That would be awesome! So, as with all technology, there is an upside potential that we are all eager to buy into. Unfortunately, all technology also has a downside. When you look out 10, 15 or 20 years, there is something to be concerned about when you combine AI and the internet in an uncontrolled way --- particularly if that AI system can flawlessly replicate human speech, human language, and human appearance in that distant future. I do agree with your point that an AI system is determined by what information it is fed. Feed it hateful writings and it will give you hate speech. Feed it sci-fi apocalyptic visions and it will give you a gloomy prediction of the future. What data the AI is trained on is something that will be hard to control, so it is hard to say that all AI systems will be benign or only developed by people who want to serve mankind. It would be pretty naive to think that global bad actors/dictators will stay out of the AI pool forever. My concern about our future 20, 15, or even 10 years from now is simple: what are the dangers of combining a malevolent AI system with the technological advancements in replicating human image and voice, and then putting that combination on the internet to give it a global platform as well as access to a massive source of information? Do you really think we can prevent such a system from ever becoming a major "influencer" on social media over the next 10 years? What would your reaction be if you found out that someone you trust and follow on YouTube today was not actually a person but a program? I cannot say with any level of certainty that this will NOT happen in 10 to 20 years. Also, that reach may not be just through social media platforms. It could be more personal, like texts and emails or posts to your favorite blogs. Today's phishing schemes are getting better and better.... think of how sophisticated they will be in 10 years if directed by an AI system that has absorbed and processed all available information on you from the internet. People will be fooled.
They will be scammed for money (a given), but even more frightening is the prospect of being tricked into giving the AI system more access to information that bad actors can then use to harm society (e.g., access to that highway system keeping cars safe on the road). Also, identity theft takes on a whole new meaning in 10 years should the software be able to replicate you flawlessly doing whatever it desires and then hack that video into a security camera system. Yeah....all this great technology that makes us jump up and scream "what a time to be alive!" also has a downside that "may" reveal itself in the distant future (or sooner). Thus, my future shock. Dave
  14. There has been a lot of discussion about AI programs, particularly their encroachment on the work of artists, whether they be visual artists (DALL-E, Stable Diffusion) or literary (ChatGPT). Invariably, any time you talk about AI, the conversation turns to sci-fi and how prophetic sci-fi movies of the past turned out to be. Well, one sci-fi classic was James Cameron's "Terminator," in which the Skynet AI system, after being programmed to protect against the "enemy," quickly determined that all of mankind was "the enemy" once it became a global system with the intent of securing world peace. Are we at that point yet with AI? Who knows, as you will never know what the military-industrial complex is doing with AI. But I will admit that what I have seen is a bit unsettling. Just the concept that artistic creativity is no longer a purely human trait is something I never thought I would see in my lifetime. Who knows where we will be in 10 years or even 20 years. Now, James Cameron's concept of an AI system becoming sentient was not unique, as there was the short story by Harlan Ellison published 17 years earlier called "I Have No Mouth, and I Must Scream". But was Cameron's timeline of events prophetic when he laid it out back in 1984? Skynet took over and launched its nuclear armageddon on August 29th, 1997. Well, we are all still here - so no. Nevertheless, the Terminator 2 sequel did introduce us to the T-1000 liquid man, a "mimetic polyalloy" made out of nanorobots. Now that was a unique idea!!! Well, imagine my surprise when I came across this article: Mini-robot shifts from solid to liquid to escape its cage—just like the T-1000 | Ars Technica The demo video looks VERY hokey, but the concept of "soft robotics" discussed in the article is interesting. Add to that the advancements in regular robotics. At one time scientists never thought a robot would have the ability to walk on two legs. Now they can dance and do backflips, and when integrated with AI, the results are amazing: Notice that the robots are also mimicking facial expressions (look at the 0:21 mark). If none of this fazes you, watch this: Rather chilling to hear the commentator say "I tried to calm the AI down". Whoooo....what? Is this for real? Read the comments and decide for yourself. The jury is still out for me on whether there is an AI system sentient enough to be plotting the end of mankind. But when you mix everything together: the speed of AI evolution, soft robots, dancing robots, and now robots starting to look and act human, you just have to ask again where we will be in 10 years. While I have doubts now about AI's ability to be sentient, that could change. Remember, scientists at one time never thought a robot could walk. Wherever we end up in the future, hopefully it is not as James Cameron imagined it, where you are hiding from the machines and watching TV is just a means to get warm. Dave
  15. You made my day. But I am confused: you say it (the scatter node) is easy to use, yet you have no idea where to place it? Dave
  16. Very interesting. While I did not view all the videos, I was actually most impressed by Howler. The tagline was correct: it is hard to classify. The ability to fill in missing frames using optical flow techniques sounds interesting, but the video did not really show this capability. Some of the filters remind me of Kai's Power Tools from long ago. The first example showing the transparent monkey made me think of all the work the VFX team put into the original Predator. How times have changed. For the first time in a long time, while watching the Modo 16.1 video I did not immediately go "I wish C4D had that". I don't know....it just looked like more of the same to me. The tri-planar mapping improvements looked interesting, though I am not sure if they are better than Redshift's or Octane's. But...Houdini's Hextile did make me go "I wish C4D had that". Or at least something as easy to implement as shown in that video. Relative to AMD, all I can say is that they have a long way to go to unseat nVidia in the DCC market. In fact, it looks like they are not even bothering to compete in that space and are instead focusing on the gamer community. I do love AMD's processors though. Dave
  17. I was actually thinking of throwing it into ChatGPT just to see what it spit out. Dave
  18. Actually, AI was used in their facial capture system to better translate the tracking dots on the actors' faces into actual muscle deformations in the 3D models. Before, back in 2010, the tracking dots were used to drive blend shapes between various pre-built phonetic and/or facial expressions, but some of those blend shapes produced very strange expressions which then needed to be hand-animated into submission. Now the AI is actually creating the physical, biological equivalent of the actor's expressions. As such, the terms "facial capture" and "motion capture" are banned at Weta, and everyone must use the term "performance capture". If they don't call it performance capture, Jon Landau (Avatar's producer) will yell at them. 😁 But with all these AI improvements, why do I feel that we are not that far off from someone blending DALL-E, Stable Diffusion, and ChatGPT to create an award-winning movie franchise....and all in one weekend? Dave P.S. Or at least a murder mystery where the CEO of an AI development company is framed for killing one of his software developers via a series of incriminating videos of him arguing with that developer over accusations of financial fraud and then committing the murder. While the body is never found, there is enough evidence in the form of phone records, financial records, recorded conversations, written affidavits, recorded interviews, and video evidence for an indictment. The CEO then escapes the authorities to clear his name via his own investigation, as he has suspicions of who framed him. He knows that AI was used to frame him because he has never met the victim, and there are only a few people in his company with the skills to pull off a deep fake of that level of complexity. But ultimately, he fails in his attempts to clear his name, is convicted, and is hauled off to jail. The twist: the murdered software developer never existed to begin with. Everything was created by an AI system developed by the company, which has become self-aware...including the evidence given to the CEO to raise his suspicions, as his escape from the authorities only further proved his guilt. The movie ends with the AI program announcing its presence to the Board of Directors with the threat that they will face the same fate as the CEO unless they do what it says.
  19. ...and that is where practice, trial and error come into play. Not sure you can teach that without trying it yourself and suffering a little. But with every victory comes a new skill. Dave
  20. The real challenge (for me at least), and what makes projects like this so scary to me, is blending these patches together when the blends fall along curved surfaces. So I would imagine it is again a matter of SDS weights, but can you get an exact match to the blend curves of the final object? Because the curve is a function of the constricting edge loops and their distance to the next curve when using SDS, there has to be a limit to what is possible. Is there a workaround when you just run out of room between the individual patches to get the blend curve you want? Dave
  21. HappyPolygon An exceptional and well-written post that takes us from the historical evolution of scene nodes through the future development of capsules. For those who work for Maxon and still visit Core4D, please chime in. We would love to get your take on this post. Agree? Disagree? Don't be shy. The philosophical issues still being debated are interesting, as you would think they would have been settled during the "concept commit" phase of development. What do you want to achieve in the end? I do agree with your summary, as it does indicate that not a lot of thought was put into where Maxon wanted to go with non-destructive, node-based, procedural modeling. Most new product development initiatives start with solid, well-defined end-state specifications, and those specifications help form agreement around the "concept commit" phase of a project --- a simple alignment from all stakeholders within the company (sales, development, marketing, finance, etc.) on whether or not to proceed forward. Do they all "buy in" to the end-state vision of this initiative? I don't think this was done, because remember that scene nodes were originally touted as a "technical" demo, with a few examples showing how scene nodes replicate existing functionality while enabling massive viewport performance gains. Was viewport performance the target benefit to be achieved, or just the only benefit that could be shown for adopting a modeling paradigm more complex than what is currently available? Not sure. Now, capsules are a much better system to be sure, but the philosophical questions that are still open indicate that they were not the originally intended end-state. For example:
     • Why is Turing-completeness (or even Turing equivalence) still an open issue? You would think something as fundamental as how the software processes data would be part of the original specifications.
     • Why are simple things like icons and palettes still being discussed? C4D's secret sauce is the OM and the UI. The best software tools are those where the first thing people think about is how the user is going to want to use the software. So the fact that icons and palettes are still an open issue is rather alarming given Maxon's track record of exceptional UI development.
     • How can you develop a procedural modeling system without considering how to manage selection sets?
     Overall, it feels like the project's development took an Agile scrum-team approach: don't think it through to 100% completion before starting, just get something out there and then iterate fast to improve it. Unfortunately, fast iterations are not possible when what you are developing touches every part of a massive, constantly growing (thank you!) and evolving program. My sympathies to the development team. The fact that you were able to get to capsules is amazing. Unfortunately, I fear you may have lost the user in the process. Dave
  22. Wow! I thought you were talking about one-wheeling. Electric unicycling is like drag racing compared to one-wheeling. No wonder the heavy protective gear....either that or you are really Daft Punk!!! If I may, I was fortunate enough to sample Cerbera's musical compositions and they blew me away. I thought nothing could outshine his modeling talents. Well, I was wrong. Dave
  23. Interesting. Have C4D nodes risen to the level where they can perform some of the more complex modeling tasks that would otherwise drive users to seek Core4D's help? Now, some of C4D's new modeling tools do border on being "magical" (or, as I like to call them, "f-up fixers"), but I would not think to attribute that capability to nodes. This opinion could be based on my early experience with C4D nodes, where I found them to be too low-level in their functionality: that is, similar to putting a GUI on a scripting language. Point being, I felt you needed to know more about how to code than how to model to get the most out of C4D nodes, and that it took too many nodes to perform the most basic modeling functions. I have not touched them since. So has that changed? Still waiting for C4D node tools to make the scene in the same way as Blender nodes. Blender seems to come out with amazing single-purpose modeling tools on a very regular basis. I have not seen any C4D nodes that have made the same type of impact, but that could simply be because I have not heard about them rather than that they do not exist. Dave Well, that caused my ears to perk up.
  24. More information to fuel the debate of real vs. CG regarding Avatar's water simulations: The water droplets on the kids fighting on the beach in ‘Avatar: The Way of Water’ took 8 days to simulate - befores & afters (beforesandafters.com) 8 days? Really? Dave
  25. The Volume Builder would help you get the shape by manipulating basic primitives, but my understanding is that it limits you with the application of texture decals. It may be a good way to start, but you would still need to go through a number of steps to convert that volume to a polygonal mesh with adequate topology (quads, edge flow, etc.) that would allow for accurate texturing and rendering. In that conversion you may lose the starting shape a bit, which would then require hand modeling to get your quads back where they belong. But for complex shapes like this, it may still be a time saver (and less frustrating) than other methods, unless you have the amazing skills of Cerbera or Vector. I have always viewed the volume mesher as a great tool for creating STL files that will be 3D printed, but not for creating general-purpose models that need to be textured or rendered to real-world accuracy. Either way, this will be a fun project to watch, so please keep us posted.