Leaderboard
Popular Content
Showing content with the highest reputation since 04/23/2025 in Posts
-
Not particles or pyro, but still fun - field and volume based cheese 🙂 167_OpenVDB_cheese(MG).c4d
1 point
-
Deformers are multiplicative when stacked on the same object, so you need to add each new transformation on top of the one before it. Use a Connect Object to freeze the object's form and then apply the new deformation. You can also use a regular Cube instead of the Extrude... I just kept going with it while trying to find the solution to this. Try this: now the Cube is skewed in all 3 directions with no 90-degree angles. You can use a Connect Object instead of the Edge to Spline plugin; it just helps me deal with the absence of the appropriate capsule. Rombocube.c4d Question: what does the rhombohedron have to do with special relativity? Or do you also teach other classes, like analytical geometry?
1 point
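A small aside to illustrate the "multiplicative" point with plain math rather than the C4D API: stacking two shear deformations is equivalent to multiplying their shear matrices, and the product picks up a cross term, so the shears do not act independently. The angles below are hypothetical, just for illustration.

```python
import numpy as np

# two arbitrary shear angles (hypothetical values, just for illustration)
t1, t2 = np.tan(np.radians(20)), np.tan(np.radians(35))

S1 = np.array([[1, t1, 0],   # shear X along Y
               [0,  1, 0],
               [0,  0, 1]])
S2 = np.array([[1, 0,  0],   # shear Y along Z
               [0, 1, t2],
               [0, 0,  1]])

stacked = S1 @ S2              # what two shears applied in sequence effectively do
print(stacked[0, 2], t1 * t2)  # both ~0.255: a cross term appears, the shears couple
```

That cross term is why you can't just dial in three independent skews and expect to get exactly those angles: each shear acts on the already-sheared points, which is why the post sets up every new deformation against the frozen (Connected) result of the previous one.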
-
1 point
-
request from the scene nodes discord server... a quick and dirty method of creating random poly islands: 11-sn-random-poly-islands-BASIC.c4d
1 point
-
Actually that was much easier than I expected. You just have to relate the position of your boat to the position of your material tag. In your case you'll probably need more than one material tag: one with the water properties and one with only the displacement. You mix the two (Add Material option), and don't forget to disable Tile for the displacement material tag. It also has to be set to Flat Projection.
1 point
-
I think you can easily do what you want using a Shader Field. Plug that texture into it and have ... Oh wait, you need the texture to move .... Hmmm... maybe same as above, but use the Field to generate a vertex map ... and then use the vertex map as a shader (Vertex Map shader) to re-convert it to a texture. Don't forget to make the field a child of the moving object... Unfortunately you still need a lot of polygons. I guess this can also work using XPresso if you relate the Offset U and Offset V of the material tag to the position of the object, but converting from absolute values to percentages always gives me a headache.
1 point
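For the XPresso idea at the end (relating Offset U/V to the object's position), here is a minimal Cinema 4D Python tag sketch of the absolute-to-percentage conversion. It assumes a flat-projected texture tag on the object carrying the tag, plus a hypothetical target object name and a hypothetical PROJECTION_SIZE for the projection's width in scene units; the axis mapping and signs depend on how the projection is oriented, so treat it as a starting point rather than a drop-in solution.

```python
import c4d

PROJECTION_SIZE = 1000.0  # hypothetical: the flat projection's size in scene units
TARGET_NAME = "Boat"      # hypothetical name of the moving object to follow

def main():
    surface = op.GetObject()                 # object carrying this Python tag (e.g. the water surface)
    tag = surface.GetTag(c4d.Ttexture)       # its first texture tag (the displacement one)
    target = doc.SearchObject(TARGET_NAME)   # the moving object
    if tag is None or target is None:
        return
    pos = target.GetAbsPos()
    # absolute world position -> offset fraction (shown as a percentage in the UI)
    tag[c4d.TEXTURETAG_OFFSETX] = -pos.x / PROJECTION_SIZE
    tag[c4d.TEXTURETAG_OFFSETY] = pos.z / PROJECTION_SIZE
```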
-
Very pleased to hear that Rocket Lasso is back in action after a very long time away, starting with Season 7 Ep 1 tonight at 8 pm (BST). CBR
1 point
-
So for the last couple of days I've been trying to get really deep into digital color science and all the baggage that comes with it. This is all in preparation for upcoming projects and the desire to understand this topic once and for all, at least the basics. So far everything has been working out, from Input Transforms and ACES to Color Spaces, etc. This all changed when I got to the good old topic of Alpha (oh god help me please).

As far as I understand it now, and from a seemingly very knowledgeable source, there are basically two ways of encoding color with Alpha:

Premultiplied Alpha / Premultiplied Color / Associated Alpha
Straight Alpha / Unmultiplied Alpha / Unassociated Alpha

Before I start, we have to clarify two things that are fundamental for the terminology:

RGB describes the amount of color emitted. Not the brightness, or how strong the color is, just the amount of color that is "emitted" from your screen, for each primary color.
Alpha describes how much any given pixel occludes what is below it.

tl;dr: RGB = Emission, Alpha = Occlusion

Premultiplied Alpha ... probably has the dumbest name ever, because intuitively you'd think something is multiplied here, right? Well, that's WRONG. The formula for blending with Premultiplied Alpha looks like this, where FG is Foreground and BG is Background:

FG.EMISSION + ((1.0 - FG.OCCLUSION) * BG.EMISSION)

What this comes down to is that premultiplied basically saves the brightness of each color independently from the Alpha, and the Alpha just describes how much of the background this pixel will then cover. This means you can have a very bright pixel with its Alpha set to 0, so it will be invisible, but the information will STILL be there even though the pixel is completely transparent.

Blending works like this, where the foreground is our "top" layer and the background is our "bottom" layer being composited onto:

1. Check if the current pixel has some kind of occlusion (Alpha < 1) in the foreground
2. Scale the background "brightness" or "emission" by the inverse of the occlusion value (BG Color * (1 - FG Alpha), pretty much)
3. Add the emission of the current pixel's foreground (the BG Color from step 2 + FG Color)

Straight Alpha ... is considered to be a really dumb idea by industry veterans, and often not even called a real way to "encode color and Alpha". The formula looks like this:

(FG.EMISSION * FG.OCCLUSION) + ((1.0 - FG.OCCLUSION) * BG.EMISSION)

What this means is that Straight Alpha multiplies the pixel's emission by the occlusion (Alpha), as opposed to having the final emission of the pixel saved independently of the Alpha. If you've ever opened a PNG in Photoshop, this is pretty much exactly what Straight Alpha is. There is no Alpha channel if you open a PNG in PS, just a "transparency" baked into your layer. All the pixels that are not 0% transparent are not their true color, as Premultiplied Alpha would describe it. I have not read this terminology anywhere, but personally I would kind of call this a "lossy" form of Alpha, since the true color values are lost and are not independent from the Alpha, unlike Premultiplied Alpha.
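To make the two formulas above concrete, here is a minimal single-pixel Python sketch of the "over" composite in both conventions. The sample values are made up, and this only illustrates the arithmetic, not what any particular renderer or checkbox does.

```python
def over_premultiplied(fg_rgb, fg_a, bg_rgb):
    """FG.EMISSION + (1 - FG.OCCLUSION) * BG.EMISSION, per channel."""
    return tuple(f + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

def over_straight(fg_rgb, fg_a, bg_rgb):
    """(FG.EMISSION * FG.OCCLUSION) + (1 - FG.OCCLUSION) * BG.EMISSION, per channel."""
    return tuple(f * fg_a + (1.0 - fg_a) * b for f, b in zip(fg_rgb, bg_rgb))

# made-up sample pixel: a bright foreground at 40% coverage over a mid-grey background
straight_fg = (1.0, 0.8, 0.2)                        # "true", unmultiplied colour
alpha       = 0.4
premult_fg  = tuple(c * alpha for c in straight_fg)  # the same pixel stored premultiplied
background  = (0.5, 0.5, 0.5)

print(over_straight(straight_fg, alpha, background))      # roughly (0.7, 0.62, 0.38)
print(over_premultiplied(premult_fg, alpha, background))  # identical result
```

The point being: the two encodings store the pixel differently, but each one composites to the same picture when fed into its matching formula.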
Why am I telling you all this? Fundamentally I just want to check whether I understand this concept, because there is so much conflicting information on the internet it's not even funny. I am so deep in the rabbit hole right now that I question whether some software even uses the terminology correctly, and C4D is one of them. You know how C4D has this nice little "Straight Alpha" tick box in the render settings? Well, according to the manual it does this: Am I completely crazy now, or is this not EXACTLY what I, and the blog post I linked above, describe as Premultiplied Alpha? Because we have RGB and Alpha as separate, independent components?

Another example: if you just search for "Straight Alpha" on the internet, you might find this image: This is the same story as above. Doesn't the Straight Alpha example look exactly like Premultiplied Alpha, and the example for Premultiplied Alpha like what Straight Alpha really is?

I truly feel like I'm taking crazy pills here, and I hope someone more knowledgeable in the whole Color Science / Compositing field can tell me where the hell I am wrong. Did I misunderstand what these two concepts actually look like in practice, did I miss some important detail, or is there just so much misinformation about this topic EVERYWHERE? If you've made it this far, thank you for listening to my ramblings. I hope I can be enlightened, otherwise this is going to keep me occupied forever...
1 point
-
1 point
-
I think you may have this backwards. In plain language:

Straight alpha: stores the full RGB colour for each pixel, but ignores how transparent it may or may not be, i.e. the transparency of a pixel has no impact on the colour stored. This means, for example, if you render a white cloud with a soft wispy edge in a blue sky, the rendered cloud will only contain white cloud colours; the blue of the sky will not be present in the rendered cloud, even where the alpha transparency eats into the cloud.

Premultiplied: this simply means the image being rendered has already had the background baked into the rendered colour data. In the cloud example it means the cloud will start to turn blue as its edge becomes more transparent.

In practical terms, straight alphas can be great because there's no bleeding of the background into the visual RGB data: you can take your white cloud and throw it onto any background you like and there won't be any blue from the sky creeping in. On the other hand, if you place your premultiplied cloud onto an orange sunset background, you'll get a blue halo around the cloud, which sucks.

However... it isn't all roses. Sometimes you need the background colour to be baked into the transparent edge, because some things are just flat out impossible to render due to the number of layers present or the motion on screen. Here's one which screws me over regularly: what happens if I have a 100% transparent plastic fan blade, but the fan blade is frosted, and in the middle of the fan is a bright red light? Visually the fan blade has the effect of looking like a swooshing Darth Vader lightsabre. It's bright red and burning white from the brightness, but what's there? A 100% transparent object... A straight alpha will obliterate my rendering; it's 100% transparent plastic. You can see it, but the alpha channel decides to ruin your day and now the rendering is useless. The only option here is a premultiplied alpha where the background is baked into the motion blur and SSS of the plastic fan blade. Sure, I need to make sure my 3D background somewhat matches my intended compositing background, but it's the only way to get any sort of useful render. The same goes for motion blur, DOF blur, and multiple transparent layers in front of each other (steam behind glass).

The honest answer is: use whichever one is least likely to screw you over. If you have lots of annoying transparent/blurry things to deal with, go premultiplied but plan your background ahead of time. If you want clean alphas on a simpler object render, go straight alpha.

I haven't read your linked blog all the way through, but I will say... there is an abundance of wrong people loudly proclaiming themselves to be fonts of all knowledge. There's one in the Octane community who insists on inserting himself into literally every thread on the entire Octane forum to tell you you're an idiot for using a PNG file; he has hundreds of blog pages which are a strange mix between 3D rendering and flat-earth magical woo-woo maths to show everyone just how right he is. That said, your rainbow example does match up with what the blog says. The only difference is the blog seems to think straight alpha is evil and you should only use premultiplied, whilst I would say both have their uses, with straight being preferable when possible.
1 point
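A quick numeric sketch of that blue-halo effect, with made-up pixel values (plain arithmetic, not any particular renderer's behaviour): once the blue sky is baked into a semi-transparent edge pixel, a (1 - alpha) * blue term survives whatever new background you comp over, and that leftover term is the halo.

```python
alpha  = 0.4                 # edge pixel: 40% cloud coverage
cloud  = (1.0, 1.0, 1.0)     # the cloud's own colour
blue   = (0.2, 0.4, 0.9)     # sky colour the render was done against
orange = (1.0, 0.5, 0.1)     # the sunset we actually want to comp over

# edge pixel as stored when the blue sky gets baked in at render time
baked = tuple(alpha * c + (1 - alpha) * b for c, b in zip(cloud, blue))

# re-compositing that stored pixel over the sunset (premultiplied-style "over")
recomp = tuple(p + (1 - alpha) * o for p, o in zip(baked, orange))

# what a clean comp of the pure cloud over the sunset would have looked like
clean = tuple(alpha * c + (1 - alpha) * o for c, o in zip(cloud, orange))

halo = tuple(r - c for r, c in zip(recomp, clean))
print(halo)  # ~(0.12, 0.24, 0.54), i.e. (1 - alpha) * blue per channel: the blue fringe
```

Which is why, as noted above, the 3D background needs to roughly match the intended comp background when you go the baked-in/premultiplied route.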
-
Interesting release... one thing I noticed right away: why do the spline modifiers and generators have the word 'modifier' or 'generator' after them? There is obviously no need for that, as we know it's a generator or modifier from its icon colour. Also, just like the 'Spline Wrap' modifier, the word spline should be at the beginning, e.g. 'Spline Branch', 'Spline Break', etc... This ensures naming consistency with the rest of the modifiers/generators, as well as conciseness and being DRY (don't repeat yourself)... I feel like there was a lack of attention to detail there before release... and maybe, if they plan on adding more spline tools, it might start deserving its own separate category like the spline shapes...
1 point
-
@3DKiwi Nice to see you hanging around here, I was wondering if you are still somehow in the 3D world : ) Still biking?
1 point