Posts posted by mattie

  1. Is there any difference between selecting "emit from polygon area" and "emit from polygons"? I mean... isn't emitting from all polygons the same as emitting from the polygon area? The only difference I can imagine is along the line where two polygons meet - no particles would be released there. So if you're getting a blocky appearance when the particles emit with "emit from polygons" selected, maybe try "polygon area"? Any thoughts?
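    Sketching my guess in toy Python below. This is purely speculative about what the two modes could mean in general emitter terms, not how this plugin actually implements them: "per polygon" picking faces uniformly would make small faces emit as densely as big ones (hence blockiness), while "polygon area" weighting the pick by face area would spread particles evenly.

        # Speculative sketch, NOT the plugin's actual implementation: two ways
        # an emitter could pick a source face for each particle.
        import random

        # toy mesh: (face area, label) -- one big face, one tiny face
        faces = [(9.0, "big"), (1.0, "small")]

        def emit_per_polygon(n):
            # every face equally likely, regardless of its size
            return [random.choice(faces)[1] for _ in range(n)]

        def emit_by_area(n):
            # pick probability proportional to face area -> even density
            labels = [label for _, label in faces]
            areas = [area for area, _ in faces]
            return random.choices(labels, weights=areas, k=n)

        for name, emit in (("per polygon", emit_per_polygon),
                           ("by area", emit_by_area)):
            picks = emit(10_000)
            print(name, {l: picks.count(l) for l in ("big", "small")})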

  2. Alright everyone, so I figured it out... for everyone's future edification, here's the solution; all those Blender crazy-pants people commenting in the link above send you on goose chases.

     

    when using the latest hardware, such as a 3090, you simply need the latest Cycles 4D download... the kernel issue was happening because the Cycles plugin folder didn't have the file it needed to run the real-time renderer (i.e. kernel_sm_86.cubin).

     

    You simply need to install the latest Cycles 4D experimental build to resolve it, since that build ships with the kernel_sm_86.cubin fix...

     

    I had clipped my wings when I chose the cycles_4d_509 build (I think it was). Since I'm running a 3090, I should have been using the latest, which is the cycles4d_541 build.

     

    and fixed. 

     

    Funny though, I had thought that I could simply copy the kernel files from here... https://archlinux.org/packages/community/x86_64/blender/ and add the files it needed to the Cycles plugin folder. That could work in theory, but there were lines missing in the code when I tried it, so I ended up reinstalling the latest Cycles plugin. Feeling like a silly pants in hindsight.
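    For anyone hitting the same thing, here's a rough sketch of the sanity check I should have done first: look in the plugin's kernel folder for a pre-compiled .cubin matching your card. The folder path is a made-up placeholder, and "86" assumes an RTX 3090 (compute capability 8.6):

        # Rough sketch of the check described above. The kernel folder path is
        # a hypothetical placeholder -- point it at wherever your Cycles
        # plugin keeps its kernels.
        from pathlib import Path

        kernel_dir = Path("path/to/cycles/plugin/lib")  # placeholder
        needed_sm = "86"  # compute capability 8.6 -> sm_86 (RTX 3090)

        shipped = sorted(p.name for p in kernel_dir.glob("kernel_sm_*.cubin"))
        print("shipped kernels:", shipped)

        if f"kernel_sm_{needed_sm}.cubin" not in shipped:
            print("no kernel for this GPU -- install a build that ships it"
                  " rather than hand-copying files in")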

  3. Note that the top line in my previous post included "3090" because I copied and pasted the sentence from my Google search (I am running a 3090). The error message is simply: cuda binary kernel for this graphics card compute capability (8.6) not found

     

  4. cuda binary kernel for this graphics card compute capability (8.6) not found 3090

     

    I have checked almost every other forum, and the threads I've seen on the Blender side have not been helpful.
     

    I've checked these sources:

     

    CPU+GPU rendering issues with RTX-3090 and AMD 3970X - Support / Technical Support - Blender Artists Community

     

    but this is no help...

     

    I tried running the real-time render preview in OptiX mode, but when loading up the kernels this error message showed:

     

    failed loading pre-compiled cuda kernel lib/kernel_sm_86.cubin

     

    Opening Blender and checking its preferences under OpenCL, it says: no compatible GPUs found for path tracing, Cycles will render on the CPU.

     

    Any ideas are appreciated, because in essence I think I am rendering without a GPU until I fix this.
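    For reference, here's a minimal sketch (my own, not from any of those threads) to confirm what compute capability the driver actually reports, using only ctypes and the CUDA driver API; the 75/76 constants are CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR/MINOR from cuda.h:

        # Minimal sketch: ask the CUDA driver what compute capability GPU 0
        # reports. A 3090 should print 8.6, i.e. it needs an sm_86 kernel.
        import ctypes

        CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR = 75
        CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR = 76

        cuda = ctypes.CDLL("nvcuda.dll")  # "libcuda.so" on Linux
        assert cuda.cuInit(0) == 0, "CUDA driver failed to initialise"

        device = ctypes.c_int()
        assert cuda.cuDeviceGet(ctypes.byref(device), 0) == 0

        major, minor = ctypes.c_int(), ctypes.c_int()
        cuda.cuDeviceGetAttribute(
            ctypes.byref(major),
            CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MAJOR, device)
        cuda.cuDeviceGetAttribute(
            ctypes.byref(minor),
            CU_DEVICE_ATTRIBUTE_COMPUTE_CAPABILITY_MINOR, device)

        print(f"compute capability: {major.value}.{minor.value}")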

  5. 17 hours ago, Cerbera said:

    That is not normal behaviour with a regular mouse, and indeed I can't replicate it in R23 using one. So I am presuming it must be something to do with the Wacom or its settings within Cinema. What tablet options do you have active in preferences, and are you using window scaling at all?

     

    CBR

     

    What is window scaling? But hey, nice thought. I do have the tablet area set, but noticed that it defaulted back to full after I made the changes for some reason... once I'm finished with what I'm working on, I'll check and see if it corrects itself 🙂

     

    It is set to Touch devices, with Graphics Tablet checked and Hi-Res Tablet checked.

     

    PSR cursor, tools cursors, and mouse move activation are all unchecked.

  6. 14 hours ago, Smolak said:

    I'm using a Wacom tablet and the knife works fine. Do you have Tablet set as the input device in preferences, and AutoSnap enabled in the knife options?

    It is set to Touch devices, with Graphics Tablet checked and Hi-Res Tablet checked.

     

    PSR cursor, tools cursors, and mouse move activation are all unchecked.

  7. Using a Wacom Intuos pen...

     

    Modeling using Line Cut: when I am moving from defining one cut to the next, the line between the orange click point and my mouse doesn't follow my actual cursor position... Is this an error I'm facing, or is this the way it's supposed to run? (Also, in the image below, my mouse is represented by the big black dot.)

    [attachment: dragissue.png]

  8. Whoa, I would totally wait before changing stuff like that... I mean, that just worries me. You don't want to start making hardware changes for something that could be software-driven... especially when you have a single isolated issue of C4D not working properly.

     

    It would help if you provided the motherboard, power supply, RAM, and CPU you have...

    But rather than me looking at it for you, I would just say to check your motherboard's compatibility with the 3-GPU build, and again against the power supply, etc.

     

    Describe what happens when it crashes in more detail. Does it crash upon startup? Or is it something that happens when you begin rendering?

     

    I would double-check the NVIDIA app and make sure everything is running correctly there.

  9. Brilliant, thanks for clarifying all these things; super helpful.

     

    Off topic: I did figure out why I was getting the real-time render preview error:

     

    I'm running a ROG Zenith II Extreme Alpha, and apparently I had placed the card in the wrong slot, because the bandwidth-to-CPU configurations were x16/x8/x8, x16/x16/x16, or x16/x8/x16/x8. I switched the PCIe slot to the x16 since I'm running a single GPU, and it corrected the issue as well as increased performance (a quick way to verify the negotiated link is sketched after this post).

     

    🙂 

     

    anyhoo thanks again 
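    If anyone wants to verify their slot the lazy way, this sketch just shells out to nvidia-smi (assuming it's on PATH and the driver is recent enough to expose these query fields):

        # Sketch: ask nvidia-smi what PCIe link the card actually negotiated.
        import subprocess

        out = subprocess.run(
            ["nvidia-smi",
             "--query-gpu=name,pcie.link.gen.current,pcie.link.width.current",
             "--format=csv,noheader"],
            capture_output=True, text=True, check=True)

        # e.g. "GeForce RTX 3090, 4, 16" -> PCIe gen 4 at x16
        print(out.stdout.strip())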

  10. Curious, what was the verdict on the RAM as the culprit? I think RAM issues will restart the machine rather than power it down. Power supply and GPU drivers all sounded like good avenues to me when others commented on this. Perhaps even a bad software install, if I were running down the list, but you said that you tried R20 and R23; it would be weird if the install were the problem on two separate occasions. Are there other high-performance programs (like hobbyist mentioned) that you run on the computer where you can reproduce the issue?

     

    I know that at a certain point renders can seriously ramp up their demands if the scene has simulated materials or a large point count. If your temps were stable but the machine still shut down, it could perhaps still be the CPU protecting itself from high temps. I don't have experience with how quickly auto-triggered CPU shutdowns respond, so I don't know. Just trying to run through it as I might see it. Good luck.

  11. When rendering, is it optimal to have a CPU utilization rate of 100%?

     

    I notice that my CPU will ramp up to 100% and then drop to around 10%, and it continues this way until the render is done. But I noticed that setting a custom thread count can increase CPU utilization compared to leaving it unchecked (just an observation). I thought I read that you could decrease the thread count in the interface preferences to help the machine save CPU for ancillary tasks while rendering, but I noticed it does this (see attachment) to the CPU utilization.

     

    I am trying to figure out if running a CPU at 100% is ideal.

     

    If it is not ideal, what render settings would be recommended to cap maximum rendering performance and allow for longevity of your CPU? (I've put a sketch for logging utilization below.)

    [attachment: cyclescpuquestion.png]
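    A quick sketch of one way to log the pattern while a render runs, assuming the third-party psutil package is installed (pip install psutil):

        # Sketch: sample whole-machine CPU utilization once a second while a
        # render runs, then summarise the swing between spikes and dips.
        import psutil  # third-party: pip install psutil

        samples = [psutil.cpu_percent(interval=1.0) for _ in range(60)]

        print(f"min {min(samples):.1f}%  max {max(samples):.1f}%  "
              f"mean {sum(samples) / len(samples):.1f}%")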

  12. On 12/31/2020 at 4:37 PM, imashination said:

    Unless you mean the 64-core chip?

    No, I was looking for the server-esque EPYC chip. I chose to go with the Obsidian Series 1000D, and I'm planning on using the extra room for a single-server render farm inside the machine.

    On 12/31/2020 at 4:37 PM, imashination said:

     

    "Fractal design celsius+ s36 prisma"

    A decent cooler, but a 280mm would give almost the same results if you decide to pick a smaller case.

    I went with the Corsair 280mm H115i. Happy I did.

    On 12/31/2020 at 4:37 PM, imashination said:

     

    SLI doesn't mean two graphics cards; SLI is a technique for running games with multiple GPUs and is 95% dead at this point. You can have multiple GPUs without using SLI.

    Off topic here... but do you know how CPUs are supposed to perform optimally? Does running a CPU at 100% mean you've selected the right components to max out CPU performance, or is it better to taper down the thread count to lower overall CPU utilization?

     

    On 12/31/2020 at 4:37 PM, imashination said:

     

    "Corsair vengeance rgb pro 128 gb (8x16gb) ddr4-3200 cl 16"

    Personally I would go with their 3600MHz CL16 4x32GB kit. This leaves you the option of upgrading later if you need to (though it's unlikely you would). It depends on your projects, but realistically 32GB is enough for most people. 64GB gives you lots of headroom. 128GB is a bit over the top, but if you want to, sure. 256GB is just silly at this point.

     

    Yea, I went with the 4x32GB 3600 kit. It's not even getting touched.

    On 12/31/2020 at 4:37 PM, imashination said:

     

    SD"

    Why two 1tb drives and not a single 2tb drive? A few years from now the 1tb drives will be far less useful than a drive twice the size.

     

    In terms of workflow, I have one drive dedicated to programs and another for recent projects. And a 5TB for long-term storage.

    On 12/31/2020 at 4:37 PM, imashination said:

    "if you use high poly count scenes then a quadro card is recommended... what poly count would meet the threshold for using a quadro card? AND why would one not just choose an RTX vs a quadro?"

    A Quadro card won't help with anything unless you have deep pockets. Again, it depends what you're making, but 10-12GB cards work fine for most stuff. GeForce 3090 cards have 24GB and will last you quite a while. Quadro cards can go higher, but with a price tag to more than match.

    I went with the 3090 despite some info about it not performing well... There is only one issue I have seen: it cannot render the real-time preview with Cycles... I've been using the CPU for real-time rendering, but I would like to know why I get this kernel error when using the GPU as the renderer.

    On 12/31/2020 at 4:37 PM, imashination said:

     

    "i need a power supply that is like 2000 watt or something like this."

    No, you don't. A Threadripper will use 150 watts, a 3090 will use 350. With a single CPU and a couple of 3090 cards, plus a few more watts for RAM and storage, you will be using 900 watts at full tilt. Throw in some headroom and you'll be 100% fine with a 1200-watt PSU. Even a 1000-watt unit will likely be fine.

     

    Ya, I went with the EVGA SuperNOVA T2, and I think it should cover me if I end up doubling up on GPUs in the future. I went with the 1600W just so I don't need to worry about it (rough numbers below).
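    Redoing that power arithmetic as a sanity check; the CPU and GPU wattages are the ones quoted above, the rest are my own ballpark assumptions:

        # Back-of-envelope PSU budget using the figures from the quote above.
        parts = {
            "threadripper cpu": 150,   # watts, from the quote
            "rtx 3090 x2": 2 * 350,    # from the quote
            "ram/storage/fans": 50,    # ballpark assumption
        }
        load = sum(parts.values())     # ~900 W at full tilt
        headroom = 1.3                 # ~30% margin, a common rule of thumb

        print(f"load ~{load} W, suggested PSU ~{load * headroom:.0f} W")
        # -> ~900 W load, ~1170 W PSU; a 1600 W unit leaves room for more GPUs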

  13. 35 minutes ago, 3D-Pangel said:

    I love hardware threads...if only because I love hardware.

     

    There is a lot of discussion around GPUs, but what has not been stated is whether you are going with C4D's native renderers or a third-party rendering solution. I did pick up on a mention of "cycle", but given the context I was not sure if this was a reference to Cycles 4D.

     

    Personally, I use Redshift and am loving the experience. So my next machine will be wrapped around Redshift's hardware requirements (excellent article found here). Note in that article that the number of cores is NOT as important as the number of PCIe lanes between the CPU and GPU, and processor speed. This makes sense, as Redshift offloads all rendering to the GPU, and therefore you want as wide and as fast a bus as possible between the GPU and CPU. If your rendering software is NOT optimized for the GPU but is multi-threaded, then a large number of cores helps. As for me, I am looking at the 4-core 3.6GHz Xeons. Interestingly enough, Redshift's hardware recommendations do NOT include the Ryzen Threadrippers or any AMD CPU. Given how much cheaper they are than Intel's, I would like to know if not listing AMD is just an oversight or if there is a stability issue.

     

    Relative to GPUs, there is always the old argument between Quadros and prosumer or game cards. Personally, I have been told that NVIDIA is more responsive in resolving driver issues between professional 3D applications and their professional 3D cards (Quadros), but not so with the other cards. Well, I did find an issue with the OptiX denoiser in Redshift (library failed to load), and the solution Redshift offered was to install a driver version that does NOT apply to Quadros... so not sure what to believe now, and it's par for the course as it is 2020 after all.

     

    Dave

     

    Gotta appreciate the hardware. Definitely running with Cycles. But I have to say that Redshift is a great step forward, and wrapping the next build around Redshift would make a great rig. Appreciate the article, I'll read it next. Good question about Redshift's hardware recommendations not including AMD; I'll post a reply in the future if I come across something.

  14. On 12/29/2020 at 7:01 AM, Cerbera said:

    It's all well and good having a super powerhouse of a personal machine, but are you really going to be rendering everything yourself? Or are you going to do what nearly everyone else does and send big jobs to a render farm, where they have more processing power than you could ever hope to afford yourself? Just a thought... 😉

     

    CBR

    So I have mixed feelings about render farms, maybe for two reasons. First, you're uploading the project content to them, which depending on the client may be an issue; for some projects I think it's fine, but not all. Also, when you're on the up-to-date version of certain programs, they can have difficulty with the plugin for whichever farm you choose.
