Leaderboard

  1. HappyPolygon (Customer): 4 points, 1,822 posts
  2. hvanderwegen (Limited Member): 3 points, 593 posts
  3. MighT (Developer): 2 points, 400 posts
  4. 3D-Pangel (Contributors Tier 2): 2 points, 2,847 posts

Popular Content

Showing content with the highest reputation on 05/11/2022 in all areas

  1. Join Jules Urbach as he presents his vision for the future of GPU rendering and how it will impact gaming, VFX, media, design, and cryptoart in the 2020s and beyond. He will discuss the roadmap for OctaneRender 2022 and The Render Network, the industry's first blockchain distributed GPU rendering network and 3D marketplace. Jules will also detail how the future of media lies in holographics, light field technologies, and real-time rendering, and how OTOY is working to help drive that future through OctaneRender 2022, Brigade, and The Render Network. The presentation also contains some nice new AI rendering features.
    2 points
  2. Version 1.0.0

    84 downloads

    Custom Icon Fix for C4D S26 (PC version). This version uses a mix of S24 and R25 icons to fill in only the missing icons, with a slight color correction. Install: place the files here: C:\Program Files\Maxon Cinema 4D R26\resource\modules\c4dplugin\icons. Don't forget to back up the original interface_icons.txt and interface_icons_2x.tif first. (A hedged install sketch follows after this list.) Cheers
    Free
    1 point
  3. I knew that this trailer was playing at Doctor Strange in the Multiverse of Madness screenings, but it is now showing on IMDB. Overall, I think I will love it. If anything, James Cameron knows how to craft a great story. But, 13 years after Avatar, while I am seeing improvement in the animation, I am also realizing just how well the first Avatar holds up after all this time. I just thought the technology development over a 13-year gestation period would improve every part of the CG pipeline to such a degree that it would jump out at me immediately when I watched the preview. But you tell me. Have we come to expect too much from James Cameron? Will this movie move the whole industry forward as much as all his previous movies did? Or will it just be an amazing bit of storytelling, once again immersing the audience in a new and distant world? Dave
    1 point
  4. The new trailer for Love, Death + Robots! Do we have any contributors to the series among us who can share some facts?
    1 point
  5. Oh duh! I suppose I ought to look before I put something like this up. I just noticed that XPresso provides a "Dynamics Collision" node with all the info I need. Well, maybe somebody else will profit from me exposing my own ignorance. This is cool! I guess I've never seen anybody use it before and so assumed no such thing existed. Hooray! Collision detection.c4d
    1 point
  6. Sounds powerful. Might have to take a look. Professionally I will have to stick with Adobe, though. My graphic design colleagues will definitely not switch from the Mac-Adobe-Keynote combo they have known since their beginnings. But privately I might mix things up a bit... Thx
    1 point
  7. There are many ways to convey death without gore. For example, in the first episode of the first season (Sonnie's Edge), did we really need to see the old guy pick through the woman's crushed head with his cane? In close-up? Even GoT chose a wide shot after The Mountain crushed Prince Oberyn's head, and even then did not linger on the shot for too long. The crushing was also left to your imagination when all you could hear was the sound. But in Sonnie's Edge, you saw her head get stomped until her eyes fell out, and then there was a long lingering shot as the camera trucked in for a close-up of her pulverized head twitching. Was that really necessary to the story? Was it entertainment? There were so many ways to convey the same sense of shock and horror at her death without graphically overdoing it. Great sound effects and editing would have been far more effective. Your imagination can create more true horror than the best graphics can provide. Just look at Hitchcock's shower scene in Psycho as a prime example of horror created through sheer craft with very little blood (only seen going down the drain) and NO gore. Honestly, I believe current filmmakers choose to punch up the gore so that they can be branded as "edgy". I heard the director of the 2019 Hellboy remake say as much about the end scenes where demons were just flaying people alive, and they even marketed those scenes in their "sizzle reel" to attract interest (watch here if you must). The preview almost goes as far as to say "We are R rated and gory!!! Isn't that great!!!!" Well... the movie still sucked, and covering it in blood did not make it any better. Dave
    1 point
  8. Edit: The issue described here was fixed by @darrellp in a post below, so no more help is needed. And thanks to darrellp for fixing my stupidity (although I'm pretty sure the fix won't last long). OK, I have to admit, this did not go as planned. While cleaning up, I realized I have a bug in here, and I am too stupid to find it. While the performance is quite OK and the result looks good, the internal array is actually garbage... 😢 Anyway... I tried to annotate and comment the setup a bit, so it's hopefully not too hard to find your way around. Here's the project to play around with: test_nodes_koch_snowflake_2_preallocate.c4d Fixed project file: test_nodes_koch_snowflake_2_preallocate.c4d And this is what it looks like: For any help in finding the bug, I'd be really grateful. Here's some more info about the issue: if one unlimits the Iteration parameter from lower bound 1 to 0, it gets more apparent. Iteration 0 (which is the plain triangle) looks the same as iteration 1. What looked like a simple off-by-one bug turned out to be rather strange. Dive into the LCV and "data inspect" the array going into and coming out of the Koch step (a small reference sketch of the Koch step follows after this list). Add the ingoing parameters to the Data Inspector, too. The ingoing parameters, as well as the ingoing array, look exactly as I expect them. Unfortunately, the resulting array does not: it shows the correct result one iteration too late, and then, funnily, shifted to the end of the array. I don't get it. After looking into it like a pig into a clockwork (except most pigs are more intelligent than I am), I gave up and decided to post anyway. Maybe somebody can help me...
    1 point
  9. It's true that there's currently no way to terminate an LCV. That will definitely be added. For now you have to use your solution with an extra Bool variable that remembers whether you have already reached your final state. I used a similar technique to implement the Mandelbrot iteration. For the GCD case you can use 100 as a safe number of iterations, but you can also use a better upper bound on the number of needed iterations. If you look at the worst-case section here https://en.wikipedia.org/wiki/Euclidean_algorithm#Worst-case you'll find the formulas N - 1 <= log_phi(b) and N <= 5 * log_10(b), where phi = golden ratio, N = number of required iterations, and b = the smaller of the two values whose GCD you want to compute. We have neither a log_10 nor a log_phi node (only natural and base-2 logarithms), but since log_phi(b) = ln(b)/ln(phi), you can use ln(b)/ln(phi) + 1 as your number of iterations (a hedged sketch of this follows after this list). Of course that's an "optimization" which is really specific to the GCD case. There are other cases where you can't estimate the number of required iterations, and not having a true while loop is then a waste of CPU time.
    1 point
  10. I am unable to replicate that in 26.014. Please could you upload a scene file that demonstrates the problem? CBR
    1 point
  11. Drag it down directly with the left mouse button
    1 point
  12. Yeah, you can work non-destructively in PS, but it's not the same. Whenever you do stuff, you have to keep this in mind. Smart Objects, linking, etc. are the way to go, but very cumbersome. A simple example from daily, boring ArchViz work: set up your stuff in AE (or just use an old project as a base), maybe even do some retouching because some grass patches don't look right. The client wants an update? Just exchange the existing image sequences (if you use several perspectives, for example) with the new ones. Even the stamp tool gets updated. Just re-export. Finished. You can't do this in PS as fast as in AE. If you only re-render a portion of the image, put that in a pre-comp. Done. Want five different color gradings? Pre-comp, put it in a new comp, color grade on top. Done. If the client wants another chair: again, just exchange the sequence. Everything gets updated in an instant. If it gets more complex, it gets even more fun with AE expressions, linking effects to each other, and the like. The only drawback in AE: it's not so paint-friendly. So if you do lots of retouching, better use PS. I mostly use AE these days. The biggest drawback for AE and PS: both are unbelievably slow and buggy. When starting up Affinity it's like a totally different world: way snappier, faster, etc. Ten minutes ago I had to fire up AE 2021 because the newest version just refused to render a simple 3D comp. That's just annoying. AE & PS sometimes feel like abandonware.
    1 point
  13. Not a contributor, but here you can find some info: https://www.chaos.com/blog/the-render-tricks-behind-netflixs-nsfw-love-death-robots https://www.fxguide.com/quicktakes/love-death-robots-in-v-ray/ The series has always intrigued me, but I have never had the chance to see it...
    1 point
  14. Hmm, saw the trailer before Doctor Strange. Just not excited about this movie. More hyped for Love, Death + Robots Vol. 3 😄
    1 point
  15. Have you tried enabling the Redshift top bar? In the top bar, you have the same commands.
    1 point
  16. I take it then that you've never opened Krita before? 😁 Layer blend modes, anyone?
    1 point
  17. Quick update: I have a working version with a preallocated array. Now the performance is more like what I'd expect. I still need to do a bit of cleanup before I post it here. Probably tomorrow.
    1 point
  18. The funny thing is that PhotoLine is only a few years younger than Photoshop: the first version appeared on the Atari ST! It has been developed by two German brothers ever since, and they have never put any effort into marketing at all. I first happened upon PhotoLine while looking for an alternative to Photoshop when Adobe went down the rental route with CS6. At the time, I had tried everything: from Gimp to Corel to Paintshop Pro and any other image editor I could find. Nothing quite fit my workflow the way Photoshop did. One link on an obscure website mentioned PhotoLine, and I visited their page. The site looked terribly outdated (it has since been updated): I almost closed it, but after spending weeks of testing, what was one more badly functioning image editor? I downloaded it, opened it, and was pleasantly surprised by the layer stack: I literally laughed out loud at the genius of allowing a layer, filter, or adjustment layer to have its opacity go beyond 100% (up to 200%) and even take negative values down to -200%. And as many layer masks as you wanted, great vector support, and so on. It had some unique features I had never seen anywhere before (it still does!). The GUI was pretty outdated-looking, though, and barely configurable, and there were a lot of paper cuts. The devs have been incredibly receptive and responsive to requests over the years, and while the GUI may still not be the prettiest, PhotoLine has become a damn fine workhorse. Some things are just utter genius. It actually outperforms Photoshop in some key areas: for example, image mode and bit depth are layer-dependent rather than file-dependent. It even allows for 1-bit bitmap layers, which is truly helpful. It is also possible to set it up for round-tripping layers and files to external software. Send a vector layer to Inkscape, edit, save, and PhotoLine automatically updates the layer. Or send a layer to Krita, paint, save, and again the layer is updated. Or send a layer for export directly to another external app. It's also (in my opinion) the most powerful alternative to Photoshop in terms of raw graphics editing capability, yet ironically the least known 🙂. And to be fair, most users would probably be attracted more by Affinity Photo's GUI look, and granted, Affinity Photo is a great alternative for many. I have Affinity installed as well, but mainly use it for things like focus stacking. Affinity has too many paper cuts for me and a number of show-stopping limitations (for my work, at least). PhotoLine is my main hub for image editing and still comps.
    1 point
  19. A fellow PhotoLine user! Don't be surprised that Photoshop is somewhat more limited in terms of non-destructive editing. I have been a Photoshop user since version 3.5 myself, although I no longer use Photoshop for my own personal or professional work if I can help it. The main issue with Photoshop is its legacy code base. Layer transformation is a good example. It is possible to non-destructively transform a layer, but only when the content is placed as, or converted to, a Smart Object (similar to placeholder layers in PhotoLine). The drawback is that (just like After Effects and its pre-comps) the content can no longer be directly edited: to edit a Smart Object's content in Photoshop or a comp in AE, these must be opened individually in a new window, even for something as simple as a pixel edit. In PhotoLine (as you are aware) this is not necessary: a transformed layer remains editable, and the transform itself is non-destructive as well. Affinity Photo also allows for non-destructive "image" layer transforms, although those layers again can NOT be edited until they are rasterized to editable "pixel" layers; after that conversion the transforms are non-destructive. In short, Photoshop relies heavily on Smart Objects to do anything non-destructive. Any layer or group of layers must be converted to a Smart Object before a filter outside the standard adjustment layers, such as a Gaussian blur, can be applied non-destructively. In both Affinity Photo and PhotoLine this is not the case, of course. It is, for example, possible in PhotoLine to convert a layer or group of layers (or import an entire file, either linked or embedded) into a placeholder layer and then apply almost any type of filter non-destructively. And that includes many compatible third-party Photoshop filters, just like Photoshop! So in PhotoLine the user is given a choice, and often it is not even necessary to convert a layer to a placeholder (Smart Object) layer. The main reason Photoshop limits non-destructive filters to its Smart Objects is simple: performance. The senior developer at the time (Chris Cox) decided that non-destructive filters (unlike adjustment layers) would become too slow and degrade performance too much, which did make sense at the time. And while this may still be true when working on heavy and complex layered comps/projects, for most regular jobs live effects like the ones in PhotoLine and Affinity Photo work just fine nowadays with existing hardware. For very heavy compositing with high-resolution files and high bit depths (32-bit EXRs, anyone?) it becomes essential to "pre-comp" things using placeholder layers in PhotoLine as well. Also something to consider when working with Photoshop: the so-called "16 bit" image mode is actually a 15-bit one. This is again a legacy part of the core code that was never updated after the 16-bit mode was first introduced in the nineties. The implication is that when a full-range 16-bit render is opened in Photoshop (or a 32-bit image is downgraded to 16-bit), only half the values of a full 16-bit mode are retained. This is problematic for HDR work, for example, or for scientific purposes. Even more problematic is that Photoshop does NOT warn its users about this. So a 0-1-2-3-4-5-6-7-8-9 ~ 65535 range becomes 0-0-2-2-4-4-6-6-8-8 ~ 65535 (the infamous "15bit+1" programming trick to simplify calculations and speed them up); Photoshop throws away the rest. I kid you not. (A small sketch of this rounding effect follows after this list.) Selections are always done in 8-bit space, which means your lovely 32-bit or 15-bit image part that was just selected, copied, and pasted is severely downgraded to an 8-bit range. This remains unfixed to this day. Many filters still refuse to work in 32-bit in Photoshop. No such issues in PhotoLine or Affinity Photo. (Yes, the user must take responsibility for some of the filters downgrading the range of values, of course.) All of which is the result of Photoshop's aging legacy core. And there are quite a few features missing in Photoshop compared to PhotoLine, Affinity Photo, or Krita: multiple layer masks per layer? Sorry! A -200% to +200% layer opacity range (only PhotoLine does this!). Freely combining layers with content at different bit depths and colour modes? Sorry, you need a Smart Object in Photoshop (PhotoLine lets the user combine these freely in the same layer stack). And so on. Obviously I could point out a few things that Photoshop does better, but it's mainly down to its way of handling channels and of course plugin support. 3D is deprecated in PS and no longer counts as an advantage (given the horrible implementation, good riddance, I say! 🙂). All in all, I found PhotoLine to be the most flexible of image editors in terms of workflow and adaptability. Its layer stack is arguably one of the best, and it has a number of tricks up its sleeve that no other image editor offers.
    1 point
  20. From the album: Matches' Makes

    An image made with C4D, ZBrush, and Photoshop. Based on a design by my brother.
    1 point
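
Sketch for item 2: purely as an illustration of the install steps described there, a minimal Python sketch that backs up the two original icon files and then copies the replacement files into the R26 icons folder. The download folder (FIX_DIR) is a hypothetical placeholder; only the paths quoted in the post are taken from it.

    # Hedged sketch, not part of the original download: back up the stock icon
    # files, then copy the fix over them. FIX_DIR is a hypothetical placeholder
    # for wherever the downloaded icon fix was unpacked.
    import shutil
    from pathlib import Path

    ICON_DIR = Path(r"C:\Program Files\Maxon Cinema 4D R26\resource\modules\c4dplugin\icons")
    FIX_DIR = Path(r"C:\Temp\icon_fix")  # hypothetical: folder holding the downloaded fix

    # Back up the originals before they get overwritten.
    for name in ("interface_icons.txt", "interface_icons_2x.tif"):
        original = ICON_DIR / name
        if original.exists():
            shutil.copy2(original, original.with_name(original.name + ".bak"))

    # Copy every file from the fix folder into the icons folder.
    for item in FIX_DIR.iterdir():
        if item.is_file():
            shutil.copy2(item, ICON_DIR / item.name)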
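
Sketch for item 8: the Scene Nodes setup itself cannot be reproduced here, but as a reference for what each iteration of the array should contain, here is a small Python sketch of the classic Koch step on a closed polygon. Iteration 0 is the plain triangle (3 points) and iteration n yields 3 * 4^n points, which is the behaviour the post expects but does not see.

    # Hedged reference sketch (not the node setup from the post): one Koch
    # subdivision step on a closed polygon given as a list of (x, y) points.
    import math

    def koch_step(points):
        """Replace every edge A->B with the four edges of the classic Koch rule."""
        result = []
        n = len(points)
        for i in range(n):
            ax, ay = points[i]
            bx, by = points[(i + 1) % n]
            dx, dy = (bx - ax) / 3.0, (by - ay) / 3.0
            p1 = (ax + dx, ay + dy)          # end of the first third
            p2 = (ax + 2 * dx, ay + 2 * dy)  # start of the last third
            # Peak: rotate the middle-third vector by -60 degrees around p1.
            ang = -math.pi / 3.0
            peak = (p1[0] + dx * math.cos(ang) - dy * math.sin(ang),
                    p1[1] + dx * math.sin(ang) + dy * math.cos(ang))
            result.extend([(ax, ay), p1, peak, p2])
        return result

    # Iteration 0 is the plain triangle; iteration n has 3 * 4**n points.
    shape = [(0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3.0) / 2.0)]
    for _ in range(3):
        shape = koch_step(shape)
    print(len(shape))  # 3 * 4**3 = 192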
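
Sketch for item 9: the post is about Scene Nodes rather than Python, but the same idea translates directly, namely running Euclid's algorithm inside a fixed-count loop sized from ln(b)/ln(phi), with an extra Boolean that remembers whether the final state has already been reached, since the loop cannot terminate early.

    # Hedged illustration only; the LCV node setup itself is not reproduced here.
    import math

    def gcd_fixed_iterations(a, b):
        """Euclid's algorithm in a fixed-count loop, standing in for a loop without early exit."""
        a, b = abs(a), abs(b)
        if a == 0 or b == 0:
            return a or b
        phi = (1.0 + math.sqrt(5.0)) / 2.0
        # Worst-case step count: N <= log_phi(min(a, b)) + 1; add a small margin
        # to guard against floating-point rounding of the logarithm.
        max_iter = int(math.log(min(a, b)) / math.log(phi)) + 2
        done = False                 # the extra Bool that remembers the final state
        for _ in range(max_iter):
            if not done:
                a, b = b, a % b
                if b == 0:
                    done = True      # keep looping, but stop changing the values
        return a

    print(gcd_fixed_iterations(1071, 462))  # 21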
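
Sketch for item 19: not Adobe's actual code, just a hedged Python illustration of the arithmetic described there. The exact rounding Photoshop uses may differ, but scaling a full 16-bit range onto 0..32768 and back collapses neighbouring values in the way the post shows.

    # Hedged illustration of the "15bit+1" effect (not Adobe's implementation).
    def to_internal(v16):
        # Scale a 16-bit value (0..65535) onto the 0..32768 internal range.
        return round(v16 * 32768 / 65535)

    def to_16bit(v15):
        # Scale the 0..32768 internal value back onto 0..65535.
        return round(v15 * 65535 / 32768)

    print([to_16bit(to_internal(v)) for v in range(10)])
    # [0, 2, 2, 4, 4, 6, 6, 8, 8, 10] -- pairs of neighbouring input values collapse
    # onto the same level, so only about half of the 16-bit values survive the round trip.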