Everything posted by DasFrodo

  1. That sounds like you DO NOT like ACES. If so, can you elaborate a bit? I'm currently experimenting with it in my pipeline and honestly I can't see the benefit yet. I don't work with any real photography, just CGI.
  2. I hope you realize how valuable this is, thank you SO much for taking the time to explain this over and over! It's really problematic to research a topic like this if you're not sure what sources you can even trust. As you said many times, there's just so much conflicting information on the internet it's not even funny. Yes, I did type that out in a wrong way. Apologies 🙂 I have understood this part at this point, although it was really tough to wrap my head around this principle after almost 20 years of thinking that 100% transparent = no color information. I feared that this would be the answer... sadly I don't have the skill in those other packages, and neither do I have the time or budget to switch over right now or in the near future. This will be a "maybe in a year or two" thing. How AE is still so prevalent in the CGI space with all that bullshit going on is beyond me. I'm building my pipeline right now and I'm having NOTHING but issues with AE. Color Management is an opaque and needlessly complex pain in the ass, Alphas are handled incorrectly, performance is bad. Man, I wish I had learned Nuke instead of this crap. At this point I don't even know what to trust anymore, my intuition and experience or some piece of Adobe software that should work but isn't exactly likely to work correctly, considering all the crap happening around their software.
  3. I have a feeling you have a LOT of experience dealing with the confusion around this topic, lol. Because I am still confused. I tried downloading the test image you posted and setting it up in AE. Either AE's Alpha handling is bad or I didn't set it up correctly. This is how it looks right now; on the right I opened the "CCSkull_06_d-sRGB_t-ACES-sRGB.jpg" from the Repo as reference. Some of the colors look really close, some look totally off. Of course, the entire candle flame AND the glow is missing. But with color management "close" is not good enough, so what am I doing wrong?
  4. As far as I know the best way to comp your glow from renders is just don't. Getting the glow to correctly export from render engines is always a major pain in the butt for exactly this reason and at this point I just do it in post via AE or some playing around in PS. As said above Red Giant has some really good Glow plugins for AE, and they're part of Maxon One now.
  5. I'm even more confused now. Wikipedia says this: So Wikipedia also says that Straight Alpha is the one that has the emission independent from the Alpha. Also, apparently Premultiplied Alpha IS multiplied with the Alpha value. It would make sense, since a 50% "covered" 70% green is 35%, because half of the emission is absorbed by the coverage. I think I understand the issue now. The blog is simply about the BLENDING operation with premultiplied alpha values. It has nothing to do with the images themselves. It is simply easier to do calculations with premultiplied images, since straight alpha images need to be multiplied with their alpha by the renderer at runtime to get the premultiplied values needed for blending. Premultiplied simply comes with the multiplication integrated already (the little sketch right below this post runs both versions on the same numbers). I could have thought of this sooner, since a couple of pages earlier the blog talks about how Photoshop internally uses straight alpha calculations and thus ends up with wrong colors. Photoshop only blends correctly in 32 bit mode, since it uses linear premultiplied maths then. At least that's how I understood it. tl;dr: Doesn't really matter, just tell your software what you're feeding it and it should be fine. Also use straight or premultiplied according to what you want to do, like @mash said here: https://www.core4d.com/ipb/forums/topic/119202-premultiplied-vs-straight-alpha-worst-rabbithole-i-have-ever-been-in/?do=findComment&comment=764541
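A minimal Python sketch of the two blend paths described in the post above, using made-up values that mirror its 50% coverage / 70% green arithmetic (nothing here comes from the blog or from any specific software): multiplying at blend time with straight alpha and pre-multiplying the stored RGB first give the same composite.

```python
# Hypothetical example values; per-channel, linear math.

def blend_straight(fg_rgb, fg_a, bg_rgb):
    # Straight alpha: the multiply by coverage happens at blend time.
    # (FG * alpha) + (1 - alpha) * BG
    return tuple(f * fg_a + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

def premultiply(rgb, a):
    # Bake the coverage into the stored RGB values.
    return tuple(c * a for c in rgb)

def blend_premultiplied(fg_rgb_premult, fg_a, bg_rgb):
    # Premultiplied alpha: the multiply already happened, so just add.
    # FG + (1 - alpha) * BG
    return tuple(f + (1.0 - fg_a) * b for f, b in zip(fg_rgb_premult, bg_rgb))

fg = (0.0, 0.7, 0.0)   # 70% green emission
alpha = 0.5            # covers half of whatever is behind it
bg = (1.0, 1.0, 1.0)   # white background

print(blend_straight(fg, alpha, bg))                          # (0.5, 0.85, 0.5)
print(blend_premultiplied(premultiply(fg, alpha), alpha, bg)) # (0.5, 0.85, 0.5), identical
```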
  6. lol I think I've stumbled over this exact blog at some point. That said, I'm not a fan of PNGs either unless I'm rendering a still that just needs some color adjustments. Thing is, all that you guys said is what I thought as well, but that blog seriously confuses me. If this man is to be believed (and I think he can be, since he uses tons of credible sources that I've personally double-checked) and I did not misunderstand it, it is exactly the other way around, hence my scepticism towards the naming in C4D. Let me quote some stuff from the blog: ... which would make absolute sense, because it would look like a STRAIGHT alpha in the images I posted above and absolutely NOT like the premultiplied images above. ... which also supports the idea that premultiplied is NOT the thing that has the background or anything else baked in, but is rather "lossless". I know that ALL sources, including ChatGPT / Copilot, tell me that is NOT the case, but the general confusion around the terms that you can feel everywhere has me totally bewildered. It doesn't help that you can find information about AE and PS not really working in premultiplied math but instead internally doing straight alpha calculations and more or less barely getting the right result. Some of these threads are old though, so I have no idea if this is still the case. I would believe so though, since we're still talking about Adobe. Especially what you posted @HappyPolygon. If I did not TOTALLY misunderstand the blog, and if it is truthful, the values that ChatGPT gave you should be EXACTLY the other way around, since premultiplied is supposed to have the color values independent from the Alpha, and straight Alpha is supposed to be the exact opposite. Especially with language models like that and how unreliable they are, I could totally see it picking up a wrong narrative that has been widely spread on the internet by people just misunderstanding what "premultiplied" actually means. Because if you think about it, premultiplied sounds EXACTLY like what straight alpha supposedly is. I'm just questioning if maybe, just maybe, almost everybody is using the wrong terms in this case, which wouldn't surprise me in the slightest. Thank you for that super valuable input, that is going into my own little documentation for reference in the future 🙂
  7. So for the last couple of days I've been trying to get really deep into digital color science and all the baggage that comes with it. This is all in preparation for upcoming projects and the desire to understand this topic once and for all, at least the basics. So far everything has been working out, from Input Transforms over ACES to Color Spaces etc. This all changed when I got to the good old topic of Alpha (oh god help me please). As far as I understand now, and from a seemingly very knowledgeable source, there are basically two types of color encoding with Alpha: Premultiplied Alpha / Premultiplied Color / Associated Alpha, and Straight Alpha / Unmultiplied Alpha / Unassociated Alpha.
Before I start, we have to fundamentally clarify two things, important for terminology: RGB describes the amount of color emitted. Not the brightness, or how strong the color is, just the amount of color that is "emitted" from your screen, for each primary color. Alpha describes how much any given pixel occludes what is below it. tl;dr: RGB = Emission, Alpha = Occlusion.
Premultiplied Alpha ... probably has the dumbest name ever, because intuitively you'd think something is multiplied here, right? Well, that's WRONG. The formula for blending with Premultiplied Alpha looks like this, where FG is Foreground and BG is Background: FG.EMISSION + ((1.0 - FG.OCCLUSION) * BG.EMISSION). What this comes down to is that premultiplied basically saves the brightness of each color independently from the Alpha, and the Alpha just describes how much of the background this pixel will then cover. This means that you can have a very bright pixel with its Alpha set to 0, so it will be invisible, but the information will STILL be there even though the pixel is completely transparent. Blending works like this, where foreground is our "top" layer and background is our "bottom" layer that is being composited onto: 1. Check if the current pixel has some kind of occlusion (Alpha < 1) in the foreground. 2. Scale the background "brightness" or "emission" down according to that occlusion (BG Color * (1.0 - FG Alpha), pretty much). 3. Add the emission from the current pixel's foreground (BG Color from 2. + FG Color).
Straight Alpha ... is considered to be a really dumb idea by industry veterans, and often not even called a real way to "encode color and Alpha". The formula looks like this: (FG.EMISSION * FG.OCCLUSION) + ((1.0 - FG.OCCLUSION) * BG.EMISSION). What this means is that Straight Alpha multiplies the pixel emission by the occlusion (Alpha), as opposed to having the final emission of the pixel saved independently from the Alpha. If you've ever opened a PNG in Photoshop, this is pretty much exactly what Straight Alpha is. There is no Alpha channel if you open a PNG in PS, just a "transparency" baked into your layer. All the pixels that are not 0% transparent are not their true color, as Premultiplied Alpha would describe it. I have not read this terminology anywhere, but personally I would kinda call this a "lossy" form of Alpha, since the true color values are lost and are not independent from the Alpha, unlike Premultiplied Alpha.
Why am I telling you all this? Fundamentally I just want to check if I understand this concept, because there is so much conflicting information on the internet it's not even funny. I am so deep in the rabbit hole right now that I question whether some software even uses the terminology correctly, and C4D is one of them. You know how C4D has this nice little "Straight Alpha" tick box in the render settings?
Well, according to the manual it does this: Am I completely crazy now, or is this not EXACTLY what I, and the blog post I linked above, describe as Premultiplied Alpha? Because we have RGB and Alpha as separate, independent components? Another example: if you just search for "Straight Alpha" on the internet, you might find this image: This is the same story as above. Doesn't the Straight Alpha example look exactly like Premultiplied Alpha, and the example for Premultiplied Alpha like what Straight Alpha really is? I truly feel like I'm taking crazy pills here, and I hope someone more knowledgeable in the whole Color Science / Compositing field can tell me where the hell I am wrong. Did I misunderstand what these two concepts actually look like in practice, did I miss some important detail, or is there just so much misinformation about this topic EVERYWHERE? If you've made it this far, thank you for listening to my ramblings. I hope I can be enlightened, otherwise this is going to keep me occupied forever... (A tiny numeric sketch of the two formulas quoted here follows right below this post.)
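A minimal sketch (hypothetical single-channel values, not taken from the post or any renderer) that just evaluates the two formulas quoted above for a pixel that emits light but occludes nothing, i.e. Alpha = 0: with the first formula the stored emission still reaches the composite, with the second it is discarded. That is the practical difference the post is circling around, independent of which formula deserves which name.

```python
# Single-channel, linear values; purely illustrative numbers.

def over_first_formula(fg, fg_a, bg):
    # FG.EMISSION + ((1.0 - FG.OCCLUSION) * BG.EMISSION)
    return fg + (1.0 - fg_a) * bg

def over_second_formula(fg, fg_a, bg):
    # (FG.EMISSION * FG.OCCLUSION) + ((1.0 - FG.OCCLUSION) * BG.EMISSION)
    return fg * fg_a + (1.0 - fg_a) * bg

fg_emission = 0.8   # a bright pixel (think glow or flame)
fg_alpha    = 0.0   # occludes nothing at all
bg_emission = 0.2

print(over_first_formula(fg_emission, fg_alpha, bg_emission))   # 1.0 -> emission is added on top
print(over_second_formula(fg_emission, fg_alpha, bg_emission))  # 0.2 -> emission is thrown away
```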
  8. That is exactly what I thought I was doing wrong, but I didn't know how to fix it 🙂 thank you!
  9. Hey bezo, thanks for the reply. I don't quite understand what your second approach is. Can you elaborate a bit more please?
  10. Hi community, long time no see 😄 Due to circumstances (life happens, ey?) I will probably be returning to C4D. Just to have a look at the new features I have missed out on since R21 (which is a lot), I'm trying to do a bit of Scene Nodes. I have some stuff already working, but I'm currently stuck at iterating through multiple children. I have a very basic setup where I want to clone a spline onto the points of multiple objects that are children of the Nodes Spline. However, the spline only gets cloned onto the first cube. The second one is, as you can see, slightly offset. However, even if I change the order of the cubes nothing changes, so I suspect that I'm missing something to do with transforms / matrices / positions. Unfortunately, either my google skills suck at this point or not many people are posting about Scene Nodes on the internet. Any pointers, please 🙂 ? Thank you!
  11. This is why I love InstaLOD. It's not AS important for offline rendering, but if you're going to create realtime-ready models, like we mostly do right now, then it's your best friend. If I have a product, let's say a motor with a bunch of screws and hex nuts and whatnot, I will select the main body and set the max sag to the highest I can get it, and solely control the detailing by the max angle, which I set very low. Then for the smaller details like screws, I set the angle to something like 22 degrees and control mostly by lowering the max sag. This way I have precise control over which parts get how much detail. In my experience controlling with max angle works better with cylindrical and "organic" shapes. What working with max angle as the "main constraint" also helps with is that cylinders that are inside each other will get subdivided evenly instead of unevenly. What I mean by that is this: with subdivision controlled by max angle, the subdivision is even; with subdivision controlled by max sag, the subdivision is uneven. Controlling the subdivision with max angle is scale independent, since a cylinder will always be 360° no matter if it has a radius of 500m or 5mm. Controlling the subdivision with max sag is scale dependent, since it looks at the distance between the CAD model and the generated geometry (a small sketch of that geometry follows below this post). So if you want perfect overlapping cylinders without the disgusting artifacting from above, use something that nicely divides 360°, like 45, 22.5, 11.25, etc., for max angle, and don't use max sag, or turn it up so high that max angle is mostly the more aggressive setting that decides the geometry shape. The cool thing about InstaLOD is that you can do this for every single part in your assembly and get instant feedback. C4D just imports the entire thing with one detail setting whether you like it or not. So you end up with either reimporting and mixing and matching (which can take A LONG time depending on the size of the STEP) or living with not-so-great subdivision. It cannot be overstated how much a tool like InstaLOD helps in making great CAD conversions.
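A small sketch of the geometry behind the max angle vs. max sag comparison in the post above. This is generic chord/sagitta math with assumed example numbers, not InstaLOD's actual tessellation code, and the function names are made up: for a full cylinder cross-section, a max-angle limit yields the same segment count at any radius, while a max-sag (chordal deviation) limit needs more segments the larger the radius gets.

```python
import math

def segments_for_max_angle(max_angle_deg):
    # Each segment may span at most max_angle_deg, regardless of radius.
    return math.ceil(360.0 / max_angle_deg)

def segments_for_max_sag(radius, max_sag):
    # Sagitta (chord-to-arc distance) for a chord spanning angle theta on a circle of radius r:
    #   sag = r * (1 - cos(theta / 2))
    # Solve for the largest theta that keeps sag <= max_sag, then count segments.
    theta = 2.0 * math.acos(max(1.0 - max_sag / radius, -1.0))
    return math.ceil(2.0 * math.pi / theta)

print(segments_for_max_angle(22.5))        # 16 segments, for a 5 mm screw or a 500 m tank alike
print(segments_for_max_sag(5.0, 0.05))     # 23 segments for a small radius
print(segments_for_max_sag(500.0, 0.05))   # 223 segments for a large radius at the same sag
```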
  12. If you want REALLY good CAD conversion I can recommend InstaLOD. It allows you to have fine control over every single object in the STEP file, set detail settings for every single one, automate repairs, clean up the materials (so you don't end up with metal.01, metal.02, metal.03, metal.04, ...), etc. There is a free version currently, but you have to request a license manually. https://instalod.com/fsl/ We've been using the tool for over a year now for all our CAD data needs, and you wouldn't believe how much time and nerves this thing has saved us.
  13. I don't think people understand how futile this backlash is. This stuff is going to happen, and there is absolutely nothing we can do about it. I'm not even trying to be cynical here. Imho that's just how it is.
  14. Uh uhm, yeah. I used goddamn once in a post. I am swearing so much. Please, think of the children! Speaking of "thread cops": you do realize that I'd have to arrest myself, don't you?
  15. I'm just going to pretend I did not see the last sentence, but I'm inviting you to reread what I wrote, and count how many times I "swore".
  16. No, this is on Maxon. If it is that hard to find where to give feedback or report bugs, then they failed. Guess what pops up first when you Google "c4d give feedback": this forum, this thread. With non-personalized search engines like DuckDuckGo it's even worse. If you search for "C4D bug report", once again you find nothing usable. The first thing you find is a goddamn link to Chaosgroup and how to report bugs for their C4D plugins. It should not be this hard to give free feedback to a company for a software that people pay money for. In a hypothetical scenario where some user wants to give feedback, how long do you think they will look for a page on Maxon's site to give it? 95% will drop off immediately after they don't find it in the first 5 Google results. It is absolutely beyond me how Maxon moved the entire Redshift community to this forum and completely ignored the overall C4D community. The situation is bad. Thing is, all Igor is doing is trying to keep this site alive. It's the only reason why we have the paywall.
  17. I doubt they will do that, and if they do, it's not going to last long. Nobody in the Blender Ecosystem will pay this much for the tool. Well not nobody, but definitely not enough to make it worth the development cost.
  18. Heh, seems like they're branching out. Smart decision I think. Them bringing their tools to Blender is huge, because it's still lacking a good particle system / tool.
  19. Idk what's happening there, but it looks weird 😄
  20. Something weird is happening with the teeth there I think?
  21. That's... interesting. Last time I searched for it I either didn't find anything, or what I found had something about it that made it useless for what I needed. All I can remember from the last time I tried is that the transform tools were not sufficient for me.
  22. I miss the transform tools from Photoshop. Especially Perspective and Freehand Transform. These are unfortunately dealbreakers for me in many cases.
  23. https://parsec.app Stupidly easy to set up. Works with a surprisingly shitty internet connection (~10mbit up on the sending end is already enough for decent quality). Almost no input lag. If you have a good connection you almost don't feel like you're using a remote PC. I've mixed up my local and remote PC by accident multiple times when I had Parsec open.
  24. lol I still remember when I tried out Houdini a while back. Looking at a node, opening a dropdown, reading. "What the f*ck is a PolySoup?" Houdini has a couple of these really weird special words for things 😄