3D-Pangel

Everything posted by 3D-Pangel

  1. Kind of surprised to see R2024 not announced on the main Core4D page. Did I miss it? Well...with the expansion to Houdini and Blender threads on the site and the loss of "C4D" in the site's title, I guess Maxon releases are not big news anymore. Even though I understand why, it is kind of sad actually. Dave
  2. Acanthometra

    It looks like coronavirus variant M98845-B, soon to break onto the scene in 2027. Symptoms are redness around the earlobes and a strong desire for shish kabob. I love the shape though!!!
  3. Many thanks to HappyPolygon for these links. Who would have thought that a Sketchfab model could be so impressive? They offered it in an FBX format as well, so I downloaded it and went to work. While it was not modeled in quads, with no polygon flow to speak of whatsoever, the modeling was solid with excellent UVs, and there was very little corrupted geometry for me to correct. I think it was a game rip, so some details were just not adequate as textures and needed to be replaced: the engine intakes and the radar dish. Likewise, there was no landing gear, and the corruption of the geometry around the landing bays was bad enough that it needed to be replaced as well. Landing gear rigging was added. Engine glow was added. Interior cockpit lights were added. The whole model was converted to Redshift. And now the finished model: Very happy to get that monkey off my back. Now I can start on the DS tunnel. Dave
  4. Screw what the audience sees; whichever camera was "live" is the one whose position they matched when they switched the background. Also, I agree that while the size of the screen and its "projection" method (front, back, or self-illuminated, meaning LED) is not defined, doing this all in post is a bit of a challenge on a weekly show unless it is REALLY more fake than we all believe: fake audience reactions, fake judge reactions, all of it. Plus, at some point, particularly towards the end of the season, the acts start to appear on the same stage. None of that can be determined ahead of time because no one knows who those acts will be. So is it THAT FAKE (predetermined winners and losers) just because we can't accept the size of the screen? We accept real-time rendering, but we can't accept a big screen? Really? Dave
  5. Well...this video required a little inspection. Personally, I felt it was a little scripted, from the camera cuts (some appeared carefully framed) to the back-and-forth between the judges and the animation. Not saying that the whole thing lacked "live on stage" believability, but something just did not ring true. Now, how did they do it? Most likely Unreal Engine, including the new hair rendering capability that came out early this year. Notice that the dog had a very binary movement to his mouth for the lip sync: open or closed. That is all you got. While the body had deformation built into the rigging, the lips not so much. So while there are really only 8 basic mouth shapes used in animation, breaking down the "unscripted" dialogue into those shapes and morphing between them in real time may have been a bridge too far for real-time animation. What did the audience see? It was apparent that there was a screen on the stage that replicated the background of the real stage, extending from the floor to beyond the ceiling. The stage design also helped, because there were lots of vertical columns on the sides to hide the edges of the screen and make the blends better regardless of camera angle. Now, if this was in fact live, then you would need multiple renderings going on at the same time to match the angles from each of the cameras recording the performance. This was not a cheap performance. To cut down on the computational complexity, there was only one moving camera, and that shot was very short; it could have been added later for the TV broadcast. Note also the final shot of the dog walking off the stage: they cut right before he broke the edge of the screen and cut back to the judges. All in all, great planning and quite the investment in time and equipment. I would imagine that a professional VFX house specializing in CG for stage productions put this together. Even if they don't win, it will provide great PR.
Dave
  6. The Sketchfab model comes in FBX, and while I have not dived into it too closely, the modeling seems pretty solid. Unfortunately it is very non-quad, with no edge flow, complex poles, and boundary edges, as all the detail is just slapped on, but other than that it looks pretty good. The only downside is that the UV mapping is a little off, and UV re-mapping is my weakest skill. The big issue for me with the original model was that it would render incorrectly in Redshift, which I attributed to the incorrect normals and corrupted geometry; it renders fine in Physical Render. Let me see how the Sketchfab model renders in Redshift. I want to use Redshift given the polygon count of the environments I will be putting it in (the outer Death Star). Also, I want to start working on the Death Star tunnel to the reactor chamber, which will also be pretty heavy on geometry and lights and definitely in need of Redshift. I should be done in 2035 at this rate. Dave
  7. I have been running the mesh checker, but upon deeper investigation I found that there are more boundary edges than expected, and when you zoom in you find a duplicate cylinder just under the exterior cylinder, offset by an amount equal to the thickness of the hull plating. It was as if the original modeler performed an extrude on the faces but left the original faces intact (Lightwave must be okay with non-manifold surfaces). But I am not sure why that would cause a texture error when you reverse the normals on the outer layer. More work is required. Lots more. Ugh. If anyone knows where I can buy a really good C4D model of the Millennium Falcon, let me know. Sometimes, as hard as you try to restore something, the level of rot is overwhelming. Dave
  8. Some people restore old cars; I restore old models from other formats. One model I have been working on is: This model was originally done by Andy Crook in Lightwave and was graciously donated to the site Sci-Fi 3D for free download. The conversion to C4D was not easy, as about 20% of the geometry was corrupted, missing, or needed to be redone. Overall, I reduced the polygon count by almost 37,000 polygons. I then rigged the landing gear, boarding ramp, and engine power. But one problem still remains that I have not been able to solve: Note that with reversed polygons it renders well under flat lighting, but when you get to shining a light on it with shadows in Redshift, you get back-face culling issues. Reversing the normals gives you this: I have no idea how to resolve this. Dave
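Since back-face culling comes down to which way each face's normal points, here is a tiny plain-Python sketch (not the C4D API; function and variable names are my own) showing how a face normal follows vertex winding order, which is why a shell with reversed windings renders inside-out once shadows and culling are involved:

```python
# Hypothetical illustration: a face normal is derived from vertex winding
# order, so reversed or duplicated faces produce normals pointing the
# wrong way and confuse back-face culling.

def face_normal(a, b, c):
    """Unnormalized normal of triangle (a, b, c) via the cross product."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

tri = ([0, 0, 0], [1, 0, 0], [0, 1, 0])    # counter-clockwise in the XY plane
print(face_normal(*tri))                    # points along +Z: [0, 0, 1]
print(face_normal(tri[0], tri[2], tri[1]))  # reversed winding: [0, 0, -1]
```

Flipping the vertex order flips the normal, which is exactly what a "Reverse Normals" command does under the hood.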
  9. Some names to consider:
Clinton Jones (pwnisher on YouTube) - Founder of the 3D challenges on YouTube and an amazing C4D artist.
Niko Pueringer and/or Sam Gorski - Founders of Corridor Digital. At one time they were both C4D users but have since moved on to Unreal Engine as well as many of the cutting-edge AI-based apps (e.g., Stable Diffusion) in their quest to push the boundaries of making amazing digital streaming content on an indie budget. Their ability to innovate in this space and their views on AI's impact on CG art, the artists, and the industry would be very interesting.
Chris Schmidt (RocketLasso) - Great educator and master "nodesman" (not to be confused with Noseman). Just hearing his story, from GSG employee to major independent C4D influencer, would be fascinating.
Andrew Price - Founder of Blender Guru and Poliigon Textures/Models. Honestly, anyone thinking about moving to Blender probably starts their journey at Blender Guru. His beginner's tutorial on making a donut with sprinkles is to Blender what Nigel Doyle's little blue jet was to C4D. What started as a hobby has now become two major elements of the Blender ecosystem.
Nigel Doyle - So how is life in the modo-verse? What has he been up to in the past couple of years (has it been a decade yet? Feels like it), and is 3D still a part of that life? While he has been gone, I really don't think he will ever be forgotten --- that is a testament to the impact he made.
Nick Seavert (JangaFX founder) - Visionary, passionate developer of 3D software. Honestly, he is (IMHO) the "Steve Jobs" of the 3D software industry. Unfortunately, the addressable market for 3D software is a lot smaller than consumer electronics, but I don't think that matters to Nick. Listening to him is always a treat, as he tells it like it is.
David O'Reilly and/or Mike Batchelor - CEO and CPO, respectively, of Insydium. Now that Nexus is fully rolling out, what is next for Fused? What is next for X-Particles? Where do they see major threats or opportunities in the ever-changing VFX software industry?
GET TO KNOW OUR MODERATORS - I would have to imagine that juggling a day job to pay the bills and being a C4D moderator leaves very little time for things outside of C4D or their own personal projects. They've influenced many with their help and with their artwork, but who or what influences them? Dave
  10. While not evident in this example, I also find problems if there are non-manifold surfaces or unoptimized points (e.g., two points in nearly the same position). So run the mesh checker, look for any errors, fix them, and then try again. Dave
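As an illustration of what a mesh checker hunts for, here is a hedged plain-Python sketch (names are my own, not any C4D call) that flags point pairs closer than a tolerance, i.e., the "unoptimized points" case. A grid hash keeps it near-linear instead of comparing every pair:

```python
# Illustrative sketch: find near-duplicate point pairs in a point cloud.
# Points are binned into tolerance-sized grid cells; only neighboring
# cells need to be compared, avoiding an O(n^2) all-pairs check.
from collections import defaultdict
from itertools import product
from math import dist

def near_duplicates(points, tol=1e-4):
    cell = lambda p: tuple(int(c // tol) for c in p)
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[cell(p)].append(i)
    pairs = set()
    for i, p in enumerate(points):
        cx, cy, cz = cell(p)
        for dx, dy, dz in product((-1, 0, 1), repeat=3):
            for j in grid.get((cx + dx, cy + dy, cz + dz), []):
                if j > i and dist(p, points[j]) < tol:
                    pairs.add((i, j))
    return sorted(pairs)

pts = [(0, 0, 0), (0, 0, 0.00005), (1, 1, 1)]
print(near_duplicates(pts))  # [(0, 1)] -- the first two points overlap
```

An "Optimize" command would then weld each flagged pair into a single point.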
  11. First off --- many thanks to HappyPolygon for the continual updates. This is becoming an amazing resource and one that I hope will keep going. It probably should be its own sub-forum (IMHO). Also, I am rather impressed by Insydium's Fused, in particular the work done with Nexus. The foam shader examples look really good, and the wave modifier looks both easy and powerful. The upscaling across all simulations is rather impressive too. Now, I know that the Houdini faithful will have something to say, but before you do, think about capability vs. learning curve. True, Houdini is more powerful, and that is why no major VFX production studio is banking their next movie production on Insydium. But for the clod-kicking, knuckle-dragging hobbyist crowd like myself who have a day job, I find it very approachable and capable of delivering amazing results without requiring huge sacrifices in time or patience. Starting from ground zero (knowing nothing about either program), I am not sure I could get to the same level of quality with Houdini in the same amount of time and effort required by Fused. Just saying!!! Dave
  12. Been thinking about subscribing to Octane. $267 USD/year is pretty reasonable and (imagine my surprise) about equal to Redshift's annual subscription cost of $264 USD/year. What makes Octane attractive is its ease of use; then again, my biggest barrier to mastering any node-based program is being able to understand which node to connect next given all the options. That flexibility is what makes nodal programs so powerful, but it also complicates the workflow (IMHO), and you fall into "analysis paralysis". So the ability of Octane to be context-sensitive at the node connection level is very inviting for experimentation and therefore learning. In comparison, my original workflow for Redshift was to first start with what I know best, the Physical Renderer, and then convert to Redshift. That gave me the bones of what I wanted to achieve as a starting point. But I have to say that the conversion is very crappy: so much is left out, and the reflectivity was way off, so it was a complete waste of time. Nevertheless, I would still have to agree that it does take a lot of work to get Redshift to look right. There really are too many settings. I have yet to play with Octane, so I have no idea just how much easier it is to get acceptable results. But my journey with Redshift has been a struggle, so anything that is easier and just as fast is very attractive to me!!! A word about the free models, plugins, etc. that come with Octane. First off, EmberGen is a huge draw for going with an Octane subscription. But I fear that will be coming to an end. As EmberGen (and the soon-to-be-released LiquidGen) has really taken off in popularity, JangaFX no longer needs that relationship with Otoy and may be pulling out of it. So keep an eye on that. Plus, Kitbash3D is no longer part of the deal either; I think they pulled out once they launched their own subscription service called Cargo.
I originally was very anti software subscriptions but have changed my opinion, because the companies behind them (like Otoy and Maxon) are continually improving the software. I am not so much a fan of model subscriptions, where I can't use a model anymore once the subscription ends, and it is not like they are improving the models every year. Cargo has been out for a while now, and they finally included their first "Cargo exclusive" model, called Police Station. Looks cool...but would I use it? Probably not. Dave
  13. OMG...look at all those gloriously curved compound/complex surfaces!!! Amazing. The only way you could make that modeling work any harder on yourself would be to model it while looking at the monitor through a mirror, with the mouse in the wrong hand and pointing backwards, and completely forgoing any morning coffee!!!! May we see the mesh? Dave
  14. I have to disagree.....I rather liked the music. The models....not so much. Honestly, if you need an AI program to generate a block table (5 cubes slapped together) or a blocky sofa (more cubes slapped together with horrendous texturing), then you should probably re-think your life in 3D. Also, the AI is being trained mostly on furniture right now...so not quite the Dall-E of 3D. Plus, why even pursue this when photogrammetry programs produce far better results and are infinitely more versatile? Meshroom has nothing to fear. They are also focusing on the food industry with this AI program. I find this focus interesting, as it seems to be aimed at 3D artists supporting the advertising industry. But both food and furniture are sold on their visual appeal, so whatever you create in 3D needs to be spot-on perfect, and so far the results are less than mediocre. Wait a minute! Is the benefit of an AI program creating 3D models of food and furniture the ability to create mashups of the two? Is that an untapped demand area in the 3D world? I never thought of that! Here is what the video demo produced when given "Chair with a cheeseburger texture." Enough said. Dave
  15. Not sure why it just couldn't be an unwrapped spherical map of the kind we put on spheres every day in 3D. Apart from the Earth and Moon images (which we do put on spheres every day), every image in that video is 3D generated. Now, what I think is cool is that this has the makings of being the world's largest StageCraft installation! StageCraft is the brand name given to ILM's technique of creating a virtual "volume" using LED screens. What makes all StageCraft stages a "virtual reality" is that the background is driven by Unreal Engine, so as the camera moves, the part of the screen visible to the camera is rendered in real time to reflect the parallax shift necessary to give the image a sense of depth. Plus, the LEDs illuminate everything on the stage, so the lighting on the live-action stage is perfect and blends with the background. So live-action foreground and background on a spherical stage would be perfectly captured in real time, and no post work would be required. Unfortunately, existing StageCraft stages consist of 70-foot-diameter "cylindrical" LED screens wrapping around the shooting stage, with a flat ceiling of LED screens over that cylinder that continues the lighting but isn't the best for providing a background. There is always some post work required to blend the merge between the cylinder and ceiling screens should the shooting camera capture the upper corners of the stage. But this massive spherical stage would be the world's largest virtual shooting stage if the LED screens were replaced with concave versions that pointed inward rather than outward. Honestly, you could point the camera anywhere, because it is a sphere, and not have to deal with the edge blending of those cylindrical stages. Plus, given its size, imagine the sets you could put inside and the action that could be staged! Now, the downside of something that big is getting the necessary resolution of your virtual backgrounds rendered in real time. Would 16K backgrounds be required?
Could that be done in real time? Not sure. Honestly, if the Vegas developers were smart, they would have built that sphere with LED screens on the outside AND the inside. If they did that, I am sure filmmakers would come running to shoot there, as it would be far cheaper than shooting on location. Dave
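For what it's worth, the "unwrapped spherical map" mentioned above is just an equirectangular (lat/long) projection. A small sketch (my own axis convention, not from any particular engine) maps a 3D view direction to a spot on such a map, which is all a spherical screen or skydome texture lookup really does:

```python
# Illustrative sketch: unit view direction -> (u, v) on an unwrapped
# spherical (equirectangular) map, u and v in [0, 1].
from math import atan2, asin, pi, sqrt

def dir_to_equirect_uv(x, y, z):
    """Map a direction (y = up, -z = forward, by assumption) to lat/long UVs."""
    n = sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    u = 0.5 + atan2(x, -z) / (2 * pi)  # longitude wraps horizontally
    v = 0.5 - asin(y) / pi             # latitude runs top to bottom
    return u, v

print(dir_to_equirect_uv(0, 0, -1))  # straight ahead -> (0.5, 0.5), map center
```

Looking straight up (0, 1, 0) lands at v = 0, the top edge of the map, which is exactly the region a cylindrical stage with a flat ceiling struggles to cover.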
  16. I have actually had more issues with USB drives than with other types of drives. They can fail to be recognized by a port if improperly removed, or simply because there is some hardware conflict on your PC, which can come from anywhere. I have also found them to be more prone to physical damage than other drives. While this may not be an issue, every connector has an insertion life. For example, DIMM memory and CPUs are rated around 250 insertions, simply because manufacturers don't want to put more than a few microns of gold on the pins, and you should not be removing them anyway. USB-A insertion life is around 1,500 insertions, which is far less than USB-C and micro-USB at 10,000 cycles. While 1,500 insertions is probably more than you will ever need, it does point to the fact that USB-A contacts are not plated as thickly as other types of connectors and are therefore more prone to failure. Point being: I am more likely to plug and unplug a USB-A drive on my PC simply because I tend to use those ports more often. USB-C and micro-USB...not so much. Just a thought if the goal is long-term storage. Dave
  17. I have been working on a WIP for quite some time now and have been posting renders in my gallery throughout that period. Now that I am about done, I want to delete those early WIP renders and replace them with the finished renders. But so far, when I go to "Manage by Album", I am left with only the option of adding new renders; I cannot figure out how to select and delete older ones, as the "Manage" function only shows the first image that was ever posted....and even there I cannot find any delete option. What am I doing wrong? Thanks, Dave
  18. Interesting question. The shortest and most accurate answer is "nothing is permanently safe". Over time, any storage medium will degrade. Cloud services could go out of business. At-home storage media could also become unreadable, as technology will constantly change the manner in which digital media is saved (anyone still have floppy disks?). On top of all that, the software meant to read those files may no longer be supported by future versions of that software (e.g., C4D R8) or by an ever-changing OS over time (anyone still running Win98?). Now, you should also consider who you are purchasing your assets from. Do they have a "Customer Zone" where you can download your purchases when you need them? How many downloads do they allow you? Let the company store them for you, as their longevity as a company is tied to their ability to protect their assets. Let them carry the burden rather than you. I favor Evermotion and Kitbash3D for just that reason. Kitbash3D will also upgrade their assets, from which you benefit as they add compatibility for different rendering engines: they once did not texture their assets or make them Redshift compatible, but as they added those features, you benefited. Kitbash3D has now come out with Cargo, which allows you to select specific models from their collections, so you do not even have to download an entire collection (easily around 3 to 4 GB). Now, I have issues with their new licensing model, but that is another discussion for another time. But if you want to take physical possession of the asset, then for the reasons stated before about the longevity of technology leading to obsolescence, focus on how to keep a file for 10 years at most (and that may be stretching it). The beauty of that is that you probably won't even want to use the file after 5 years, given how much work would be required to upgrade it to the latest rendering engine and/or software.
So, if you want to manage the storage on your own, always consider some dual-drive array. SATA drives have been around for 20 years and are still available on the market as external RAID 1 arrays capable of being hot-swapped. So even though they are mechanical, you have redundancy. Also, the SATA format is pretty well established, so the risk of obsolescence is lower than for others. Ideally, you would want RAID 1 SSDs, but as I have yet to find them as an external drive, you would need to consider internal SSDs that are hot-swappable. And from personal experience, SSDs do FAIL, so I would still want RAID 1 redundancy; but you may need to reconsider a new PC purchase if your system was not already configured for those SSDs to be hot-swappable. Now, why do I favor external drives or hot-swappable internal drives for "permanent" storage? Well, with an external drive you remove one source of problems in the recovery chain: the failure of your PC (and the probability of that happening significantly increases after year 3). It is a lot harder to recover files from an internal drive that is NOT hot-swappable in a permanently damaged PC that won't boot. It can be done, but depending on the age of that PC, it could be a challenge in terms of connections and drivers. Not at all with an external SATA RAID 1 array connected via USB, and not as difficult with an internal hot-swappable SSD. So, if you want all those benefits without the need to purchase a whole new PC, then an external SATA RAID 1 array may be your only option. Now, none of these recommendations takes into account cost and how those costs compare to cloud storage. There are other threads on the forum about the pros and cons of cloud storage. Personally, I am okay with putting purchased models in the cloud, but I would never put financial files in the cloud...and financial files are far more important than 3D models...so all the concerns about protection and redundancy apply.
Not sure I want my yearly tax files with my social security number on any document in the cloud. So, if I am going to protect those files locally, I might as well protect my 3D files locally as well....and that concludes my argument on cloud storage. Dave P.S. Now, if anyone knows where to get an external SSD drive in a RAID 1 configuration capable of having either one of the failed SSD drives hot-swapped, please let me know.
  19. Every image, from the interior of the Death Star bay to the widest shot of the Death Star exterior, was done at the same scale. That inner bay detail could fit nicely into those far distant bays in the exterior shot. The camera could go from deep space, with the equatorial trench in the distance, all the way through the blast doors and down the hallway at the back of the bay in one shot. My intent in going 1:1 scale was to do just that: start with a close-up of a ship as it passes by, then have the camera pan with the ship to reveal the Death Star in the distance. After a short beat, the camera then follows it all the way into one of the bays, all in one shot. The goal was to sell the sheer size of the Death Star as you approach it, which is a shot you never really saw in any of the movies. Everything was a cut-away to an establishing wide shot showing scale by making the ship smaller, rather than a tracking shot following the same ship into something that was huge. Now, the diameter of the Death Star is 200 km. But some of the ships are only 30 m in size, and some of the nurnies on those ships are measured in centimeters. So, as the goal was to show the massive size of the DS relative to a small ship, you needed to preserve all those scales in the same space. It was a bit of a trick, as modeling something that big did create its own problems. Remember, everything in the trench needed to be instanced along the curve of the equatorial trench. So, at 1:1 scale, the center of that curve was literally half a Death Star away. I do like to see how much I can push C4D and my own ability to problem-solve by doing something out of the ordinary...and, by my own admission, somewhat foolish. As a hobbyist, my excuse for the sheer folly of these challenges would be "I did not know any better (like compositing the ship onto a Death Star background rendered separately at a different scale), as I am a rank amateur." Professionals get no such excuses, so they cannot throw caution to the wind.
They have something to lose if they mess up (their job) and something to win if they do it economically (future employment). I have no such motivations other than to say, "Hey, it worked!" At over 270 million polygons, I think I pushed C4D and my GPU pretty hard and found their limits. I also checked CPU usage, and even though the file size was only 68 MB thanks to instances, it was consuming over 28 GB of memory. Animating will now be the next challenge.
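To put a number on why 1:1 scale at that size is risky, here is a small sketch of my own (not from the post) using NumPy to show single-precision float resolution 100 km from the origin, the precision commonly used in GPU viewport pipelines:

```python
# Illustrative sketch: 32-bit floats cannot resolve millimeter detail
# 100 km from the world origin -- half a 200 km Death Star away.
import numpy as np

radius_m = 100_000.0                     # half the 200 km diameter, in meters
ulp = np.spacing(np.float32(radius_m))   # gap between adjacent float32 values
print(f"float32 step at {radius_m:.0f} m: {ulp * 1000:.1f} mm")  # ~7.8 mm

# a 1 mm nudge to a coordinate at that distance vanishes entirely in float32
moved = np.float32(radius_m + 0.001)
print(moved == np.float32(radius_m))     # True: the detail is lost
```

Anything finer than roughly 8 mm at that distance rounds away, which is one reason centimeter-scale nurnies far from the origin can jitter or z-fight unless the scene (or renderer) works in double precision.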
  20. Distant Death Star

    From the album: Death Star Landing Bay

    This shows pretty much just how much of the Death Star exterior I modeled.
  21. From the album: Death Star Landing Bay

    Believe it or not, all those surfaces do follow the curve of a sphere at that scale.
  22. From the album: Death Star Landing Bay

    Guns in place
  23. Just found the time to watch the full Q2 live stream. The real-time fluid meshing is astounding. And in a domain-less environment to boot!!!!! I know that there is more to do (viscosity, exporting, foam, ocean solvers, etc.), but so far it is just amazing. The real-time fluid meshing and rendering against the HDRI is just incredible. Honestly, I did not know if I was watching a LiquidGen demo or a clip from "The Slow Mo Guys" on YouTube.😀 Please also consider a GRAIN solver!!!! Destruction effects in real time would be a game changer, because their results can be unpredictable and iteration is required. Be sure to include the ability to control grain destruction with falloff fields and/or imported animated vertex maps. Speaking of maps, what is the thinking behind creating wet maps? Let the host program do it based on the exported VDB file (based on collision detection), or export the wet map from LiquidGen? Dave