Everything posted by Mash
-
For AMD systems, you want to aim for 3600MHz specifically. This is because the memory is DDR, double data rate, which means its base clock speed is exactly half, i.e. 1800MHz. 1800MHz is the sweet spot because that's the maximum stable memory bus speed of X570 motherboards, essentially the point where you're getting great performance whilst also remaining stable.

After that is the CL, CAS latency. This is how short the intervals are between sending new data back and forth; the lower the better, but anything at CL18 or below is fine. CL14 tends to be the lowest it goes, but larger memory sticks tend to be higher on the scale due to the extra overhead of handling the higher capacity.

Don't increase the voltage; there's no reason at all to do this outside of enthusiast overclocking. All you need to do is get the memory running at the rated XMP profile speeds, and the memory and motherboard will negotiate the voltages themselves.

As far as "they're made in China" goes... your entire computer was made in China. And your phone, and your TV, and your laptop, and your car's electronics. If we're avoiding Chinese electronics then you're going to be in a cave smashing rocks together.

For 128GB kits, the G.Skill one you linked will be fine, but looking at the specs it's very likely to be the same chips as this Corsair kit:

Corsair 3600MHz CL18 - CMN128GX4M4Z3600C18
https://www.corsair.com/uk/en/Categories/Products/Memory/VENGEANCE-RGB-RT-Black/p/CMN128GX4M4Z3600C18#tab-tech-specs

They aren't on the validated list yet because they're brand new, but they are tested and will be compatible.

One question: did you update your motherboard BIOS? This would always be my first action after getting memory errors which aren't fixed by setting the memory down to default settings.
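If you want to compare kits at different speed/CL combinations, the useful number is the true latency in nanoseconds: CL divided by the actual clock, which for DDR is half the transfer rate. A quick sketch in Python (the example kits here are illustrative, not specific product recommendations):

```python
# True latency in ns = CAS latency / clock speed, where clock = transfer rate / 2 (DDR).
# Equivalent shortcut: latency_ns = 2000 * CL / (transfer rate in MT/s).

def true_latency_ns(transfer_rate_mts, cl):
    return 2000 * cl / transfer_rate_mts

for rate, cl in ((3600, 18), (3600, 16), (3600, 14), (3200, 16)):
    print(f"{rate} CL{cl}: {true_latency_ns(rate, cl):.2f} ns")

# 3600 CL18: 10.00 ns  -- same real-world latency as 3200 CL16,
# 3600 CL16:  8.89 ns     but with more bandwidth on top
# 3600 CL14:  7.78 ns
# 3200 CL16: 10.00 ns
```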
-
Honestly, to get any sense of quality differences you need to specify which exact product you're talking about, and I mean the exact product SKU code, because even different batches will vary over time. RAM gets graded into different specs. The stuff that doesn't perform so well at high speeds gets turned into value/budget sticks, the good stuff becomes the mainstream product, and the very best chips get turned into their premium high-speed product; the same goes for all memory manufacturers. Asking about "G.Skill 3600MHz DDR4" memory is like asking what I think about the red Hyundai car with 4 wheels. Even the memory capacity is going to make a difference, because different size sticks use different dies and ranking.
-
Maxon's Spring 2022 Launch Event | Live Stream | S26 Announcement
Mash replied to HappyPolygon's topic in News
One thing I haven't seen mentioned (and to be fair, Maxon seem to have skipped over it) is that you can now work over the network at more or less full speed. Loading and especially saving are now multiple times faster than before. A quick test here, saving an 800MB C4D file over a 1Gbit network:

R25: 32 seconds
R26: 9 seconds

If you have a faster network connection then the speed improvement becomes even higher. With a 10Gbit network you can expect file transfer times to be up to about 10x faster than in R25.
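For a sense of how close that R26 number is to the wire speed, here's a quick back-of-the-envelope check (the 800MB figure and timings are from my test above; the comparison assumes no protocol overhead on the gigabit link):

```python
# Rough throughput check for the save-over-network test above.
file_mb = 800

for version, seconds in (("R25", 32), ("R26", 9)):
    mb_per_s = file_mb / seconds
    mbit_per_s = mb_per_s * 8
    print(f"{version}: {mb_per_s:.0f} MB/s (~{mbit_per_s:.0f} Mbit/s of a 1000 Mbit/s link)")

# R25: 25 MB/s (~200 Mbit/s) -- nowhere near saturating gigabit
# R26: 89 MB/s (~711 Mbit/s) -- close to the practical limit of the link
```
-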
The seller is telling you that his slower product is just as fast as a faster product; I'm not sure I'd entirely believe that. On an AMD system, higher memory speeds will make a difference due to the way the system accesses memory. How much of a difference depends on the other specs of the chips (dual rank, CL latencies etc), but all else equal, you would get a 5-10% difference between them. I genuinely would happily buy Corsair memory in future (I've used them for every machine I've ever built, including my current one), but I work for them, so obviously don't trust me.
-
I don't believe there's a single laptop you could possibly buy that doesn't support external monitors. Where did you read that?
-
The only possible option you have now is to right click the file and select "Restore previous versions", but that only works if you have a backup drive enabled. Otherwise you've just learnt not to keep endlessly overwriting one single file. It's the ultimate eggs-in-one-basket scenario.
-
Holy crap, they still make Dopus!? I used to use that on the Amiga 25 years ago; it was amazing.
-
The GB Aero seems superior in almost every way, an easy choice imho.
-
Keep in mind WinZip and WinRAR are commercial apps. They will keep working after the trial ends, but you will get popups you have to dismiss. 7-Zip is completely ad/payment free. Just one bit of advice: run 7-Zip once as admin, then turn off all the right click menus you don't need in the preferences, as it adds quite a few by default.
-
But you can. Just right click any file and select "Send to -> Compressed (zipped) folder". That said, the answer to your question is 7-Zip. https://www.7-zip.org/
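And if you ever want to script it rather than clicking through Explorer, Python's built-in zipfile module produces the same kind of standard .zip; a minimal sketch (the file names here are made up):

```python
# Create a standard .zip without any third-party software.
import zipfile

# Hypothetical example files; replace with your own paths.
with zipfile.ZipFile("project.zip", "w", compression=zipfile.ZIP_DEFLATED) as zf:
    zf.write("scene.c4d")
    zf.write("textures/label.png")
```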
-
That's the problem with Octane. If you're happy with its render speed, visual style, and how quickly you can work up materials and lighting, then anything else is going to be a bit of a letdown.

Arnold is far more stable and has some more complete and production-ready features, but the work done with it is dominated by those using CPU-based farms.

Redshift GPU, in my experience, simply isn't faster to render than Octane. It is better at quick cheats like simple fog versus Octane's mega slow volume mediums, and faint transparent objects struggle less in RS than in Octane, but Redshift completely falls apart on any system with more than 3-4 GPUs. 12 GPUs in Octane is 12 times faster. 12 GPUs in Redshift... a) it doesn't support 12 GPUs, b) of the 8 GPUs it does support, it will only be 4 times faster. So it's fine on a workstation, but rendering anything serious with it gets quite annoying because you end up needing multiple weak nodes instead of a small number of powerful machines.
-
Maxon's Spring 2022 Launch Event | Live Stream | S26 Announcement
Mash replied to HappyPolygon's topic in News
Regarding Redshift CPU/GPU: C4D standalone now includes Redshift on your CPU; just select it as the render engine and the interface will switch to Redshift lights, cameras and materials. If you have Maxon One or a Redshift subscription, C4D will continue to use the GPU in your system, so you haven't lost anything.

As far as speed goes, currently expect your GPU to be roughly 10 times faster than your CPU on a balanced system, e.g. 16-core Ryzen + 3090, 12 cores + 3080, 8 cores + 3070, 6 cores + 3060. If you have a GPU licence of Redshift, there is currently no reason to switch to, or enable, the CPU version; it will be significantly slower, and even enabling both CPU and GPU at the same time will be slower than GPU alone. It will get faster with time, but nobody knows by how much or how soon. It is mostly there to guarantee that every system can render with the new render engine, and so that CPU render farms can take on these rendering tasks.

If you have a vaguely modern computer and a Redshift licence, there is no reason to use CPU mode at all. For those who have commented along the lines of "my GPU isn't very good, I'll use the CPU version instead": even a cheap old GeForce 1060 will likely be faster than any consumer CPU. In short, use the GPU version if you can; the CPU engine is just there for potato computers.
-
1. The feature nobody expected, and nobody wants: Emoticon generator. Move a series of sliders to control joy, anger, nervousness and sanity to create a vector spline emoticon.

2. The feature they've sneakily removed without telling anyone: All right click menus. This was done to prevent touch screen users of C4D from having a different experience from mouse and tablet users.

3. New feature, but it can't be used in production: A new render engine has been written from the ground up, but it only works on TI graphing calculators up to 3MHz.

4. New way to screw over perpetual licence owners: ZBrush has been integrated into C4D, but all brushes require a Maxon One subscription. Perpetual licence users are limited to a 1 pixel wide brush.
-
A real glass-half-full post here 😉 What about "New feature, but it can't be used in production" or "New way to screw over perpetual licence owners"?
-
Nope, it is what it is. Your only hope would be to submit an idea on the Maxon website asking them to increase the z-buffer, especially for wireframes, to 32-bit. Currently it is either 16 or 24-bit, which isn't enough to draw them accurately.
-
I think he's not asking why the lines along the joints are missing, but rather why the lines overshoot and cross over each other. The answer is the z-buffer. Everything in 3D space essentially has a depth assigned to it so that C4D knows which items to composite in front of, or behind, others. This z-buffer only has so much precision; when surfaces overlap with each other, there's always a little inaccuracy where they meet.

This problem is worse for the wireframe, which gets composited over the top of the shaded 3D view, because it's better to stay on the cautious side and make sure the wireframe can be seen, rather than risk it vanishing behind geometry where surfaces meet. The issue is exaggerated in scenes where models meet at very shallow angles, or on shots where the camera is far away but optically zoomed in.
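To put rough numbers on that precision limit, here's a small sketch using the standard perspective depth mapping (I'm assuming an OpenGL-style depth buffer with made-up near/far/camera values for illustration; this isn't C4D's actual internal implementation):

```python
# Size of one depth-buffer step at eye-space distance z, for an n-bit buffer.
# Standard mapping: d = (1/near - 1/z) / (1/near - 1/far), so dz/dd = z^2 * (1/near - 1/far).

def depth_step(z, near, far, bits):
    steps = 2 ** bits                      # distinct depth values available
    return z * z * (1.0 / near - 1.0 / far) / steps

near, far = 0.1, 10000.0                   # illustrative viewport clipping range
for bits in (16, 24, 32):
    print(f"{bits}-bit: ~{depth_step(500.0, near, far, bits):.4f} units of ambiguity at z=500")

# 16-bit: ~38 units of depth ambiguity -- hopeless for tight overlaps
# 24-bit: ~0.15 units -- still visible at shallow angles / long lenses
# 32-bit: ~0.0006 units
```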
-
Goddam I'm handsome
-
Where's the diamond hands emoji?... Strictly speaking there's been something like 10% inflation this year; I wouldn't be surprised to see that reflected in prices.
-
There are quite a few things which jump out at me:

The sun angle of the background image shows the sun is just a few degrees off the horizon, which means there should be long streaking shadows running all the way out of shot. Instead they're short and dumpy, as if it's almost midday and the sun is above us. The light angle is simply wrong for the shot you're compositing into.

There's too much fill light on the left. That shadow should be almost black with direct sunlight; instead it is very well lit, which makes the unit not look planted onto the floor.

It's leaning to the right. Consider rotating the camera to the right so the object is centered, then using film offset X in the camera to push it back.

Your metal is just too clean. Sheet metal comes off a roller, so there would be some brushing or anisotropy effects on the surface. Or fingerprints, or corrosion. Galvanised steel for industrial outdoor use is never that perfect.

The background image choice isn't great. The entire right hand side of the image is brighter than the left, which just makes all the lighting issues even worse, as it looks like there's a bright haze of fog/sunlight behind the product. Did a nuclear bomb go off behind it? Maybe.

Quick PS fix, but I can't fix the short shadows:
-
Nothing wrong with eyeballing it. You'll get the corners in place within a few pixels in 20 seconds. On an 8K render this will equate to being accurate to within a tenth of a mm or so. The real-life product will likely have a 1-2mm tolerance for label placement, so the chances of the label visually being identified as not perfect are zero.

Really there's no point messing around going through AE or using dead 3D tools for something like this when you can get a 100% passable image in a few seconds in PS. Ideally what you want is a plugin to distort a layer to a UV pass from C4D, and I am gobsmacked that such a thing doesn't exist. Such a tool would perfectly map and distort your label onto your 2D render, but it just isn't there.
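The tenth-of-a-mm claim is just pixel arithmetic; a quick check with assumed numbers (a roughly 20cm-wide product filling most of an 8K frame, which is my guess rather than a measurement of the original shot):

```python
# How much real-world distance one pixel covers on an 8K render.
render_width_px = 7680          # 8K UHD width
product_width_mm = 200.0        # assumed: product is ~20cm wide
product_width_px = 7000         # assumed: product spans most of the frame

mm_per_px = product_width_mm / product_width_px
print(f"{mm_per_px:.3f} mm per pixel; a 3-4 px eyeballing error is ~{4 * mm_per_px:.2f} mm")
# ~0.029 mm per pixel, so a few pixels off is roughly a tenth of a millimetre
```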
-
What I do in these cases is render a version with a square plane on top of the tin. Load this into PS, add my label as a new layer on top, convert it to a smart object, then transform in distort mode and just pin the 4 corners to the plane. Then you can replace the render with the normal tin, render a mask for the label on top, and now you have a PS file where you can drop in replacement labels as needed.
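For what it's worth, the same corner-pin distortion can be scripted outside PS with a perspective warp; a rough sketch using OpenCV (file names and corner coordinates are placeholders, you'd read the real corners off the rendered helper plane):

```python
# Corner-pin a flat label onto four points in a render, like PS's distort transform.
import cv2
import numpy as np

label = cv2.imread("label.png")            # placeholder file name
render = cv2.imread("tin_render.png")      # placeholder file name
h, w = label.shape[:2]

src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
# Placeholder corners of the helper plane in the render, clockwise from top-left.
dst = np.float32([[912, 340], [1610, 395], [1580, 1120], [895, 1060]])

matrix = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(label, matrix, (render.shape[1], render.shape[0]))

# Composite wherever the warped label has content (a real pipeline would use a mask pass).
mask = warped.sum(axis=2) > 0
render[mask] = warped[mask]
cv2.imwrite("tin_with_label.png", render)
```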
-
GPU Rendering and Falling GPU Prices (1 3090 vs 2 3080s)
Mash replied to Falstaff's topic in Discussions
It isn't a question of making them appear as one GPU to the OS via SLI/NVLink; GPU engines are happy to see and use multiple cards, just the same as a CPU engine will split an image over 8 or 128 CPU cores.

Octane can use different cards from different generations and will more or less use 95%+ of their power on average. I've tested this with a system containing a 3090, 2080, 2080 Super and 2080 Ti, and it works fine. We've also tested on up to 10 Quadros in a single system, and at that scale you're still getting the majority of the power from the GPUs; by that point you're looking at about 90% efficiency.

Redshift, things aren't so shiny there unfortunately. 2 GPUs is around 95% efficient, which is fine; 3 GPUs is down to around 90%; 4 GPUs and you're down to about 75%; anything after this is pretty much a waste, and once you go past 5 GPUs you will find your renders actually get slower. So, fine for workstations, but it makes building a RS render farm tricky, as a machine which should be able to be kitted out with a dozen GPUs is basically useless.
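Plugging those efficiency figures into a quick calculation shows why the farm economics break down (the percentages are the rough numbers from my tests above, not official specs):

```python
# Effective speedup = GPU count * scaling efficiency, using the rough figures above.
configs = [
    ("Octane",   10, 0.90),   # ~90% efficiency even at 10 GPUs
    ("Redshift",  2, 0.95),
    ("Redshift",  3, 0.90),
    ("Redshift",  4, 0.75),   # and it only gets worse past 5 GPUs
]

for engine, gpus, efficiency in configs:
    print(f"{engine}: {gpus} GPUs -> ~{gpus * efficiency:.1f}x faster than one")

# Octane: 10 GPUs -> ~9.0x
# Redshift: 4 GPUs -> ~3.0x, which is why dense multi-GPU RS nodes stop making sense
```
-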
Keep in mind, the menu highlighting will happen for the most mundane things, like renaming an object.
-
25% more electricity usage, 25% more cost, for 7% more render speed.
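Put another way, simple arithmetic on those same figures:

```python
# Performance per watt of the faster card relative to the baseline.
speedup, extra_power = 1.07, 1.25
print(f"Perf/watt: {speedup / extra_power:.2f}x")   # ~0.86x, i.e. ~14% worse efficiency
```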