DanHinton · Limited Member · Posts: 79 · Days Won: 2

Everything posted by DanHinton

  1. I can highly recommend Mocha Pro or SynthEyes; both come with a suite of tools to make your life easier when tracking.
  2. You may be able to get away with removing temperature altogether and just using density. I would also suggest upping the dissipation for density so you don't get an enormous plume of steam. You can also dial in a lot of the look in comp, so don't be afraid to render "denser/bigger" than you need and play with the opacity after the fact.
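Dialling the opacity in comp amounts to scaling the foreground before a standard "over" composite. Here's a minimal sketch of that math in plain Python (illustrative only — the function name is mine, and this is not any particular compositor's API):

```python
def over_with_opacity(fg_rgb, fg_alpha, bg_rgb, opacity=1.0):
    """Premultiplied 'over' composite with an extra opacity dial.

    Dropping `opacity` below 1.0 fades a deliberately dense render
    back over the background, which is the 'render bigger, dial it
    down in post' trick described above.
    """
    a = fg_alpha * opacity
    # Scale the premultiplied foreground, then composite over the bg.
    return tuple(f * opacity + b * (1.0 - a) for f, b in zip(fg_rgb, bg_rgb))
```

With opacity at 0.5, a mid-grey plume over white ends up three-quarters white — exactly the "thinner" look you'd dial in by eye.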
  3. A Sketch and Toon material, which you can use to render splines without needing additional geometry.
  4. If you change the hair type to splines, can you then cache the whole lot out as an Alembic, then render the outer tentacles with a Sketch and Toon material? EDIT: have you tried caching the hair too?
  5. This package looks useful and would be a nice simple way to get "troublesome" geo into Unity. https://assetstore.unity.com/packages/tools/modeling/mega-cache-26522
  6. Your best shot is to report the bugs on the RS forums. Make sure to provide a log, steps to reproduce the crash, and, if possible, a cut-down version of the scene.
  7. I'm aware this isn't necessarily useful for _you_, but here's my 2p: the shortest path script works on the inside of a piece of temp geometry, in this case a spline/sweep which is inside a volume object. Pretty handy for making things like veins, or tree trunks/branches.
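For anyone wondering what "shortest path" means here: under the hood an approach like this boils down to a standard graph search over the points of the temp geometry. A hypothetical, stripped-down Dijkstra in plain Python (the function and argument names are my own, not the actual script's):

```python
import heapq
import math

def shortest_path(points, edges, start, end):
    """Shortest path over a point graph, weighted by Euclidean distance.

    points: list of (x, y, z) tuples; edges: list of (i, j) index pairs.
    Returns the list of point indices from start to end.
    """
    dist = lambda a, b: math.dist(points[a], points[b])
    # Build an undirected adjacency list.
    adj = {i: [] for i in range(len(points))}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    best, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == end:
            break
        if d > best.get(node, math.inf):
            continue  # stale queue entry
        for nxt in adj[node]:
            nd = d + dist(node, nxt)
            if nd < best.get(nxt, math.inf):
                best[nxt], prev[nxt] = nd, node
                heapq.heappush(queue, (nd, nxt))
    # Walk back from end to start.
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```

In a real setup you'd feed this the cached points of the spline/sweep and connect the resulting indices back into a spline.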
  8. In the timeline you can show/hide keyframes based on takes. Bear in mind you won't be able to actually edit them unless you drag and drop the specific object you want to edit into the take you want to edit it in. EDIT - here's the menu
  9. Your materials in the screenshot have the alpha channel enabled. If you uncheck that channel, you should be able to remove the X-ray effect.
  10. You won't be able to export visibility from C4D. Use scale set to absolute, uniform, -1 instead. That should do the trick. You'll probably also want to get rid of any falloff on the field so that stuff within it is immediately "removed".
  11. Yeah, I'm noticing that behaviour here too. Will post a(nother!) bug report.
  12. I suppose the question here is: do you need to simulate this? The geometry and setup look simple enough that you could just rig this up with either a pose morph or some deformers.
  13. Yes. That's the "Powerslider", by the way, in case you're looking for it in the manual. To select keys, click and drag the area directly underneath the frame numbers. EDIT: too slow!
  14. What sort of images are you making (product/architecture/VFX, etc.)? What pixel/final image size? What optimisation are you already undertaking (retopo/LOD/rigs, etc.)? What hardware? What renderer?
  15. If the blend shapes are Apple ARKit, then I have a couple of Xpresso rigs which can be used to retarget and mix takes. Let me know if you're interested. In the future I'm looking to develop these further so that specific face regions can be blended across takes, which should hopefully make things like adding blinks, smiles, twitches, etc. a lot easier.
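For anyone curious what mixing takes per face region means in practice, here's a hypothetical sketch of the blending logic in plain Python (the shape names follow Apple's ARKit conventions; the function is made up for illustration — the actual rigs are Xpresso, not Python):

```python
# Example region: a subset of ARKit's 52 blendshape coefficients.
EYE_REGION = {"eyeBlinkLeft", "eyeBlinkRight"}

def mix_takes(base, overlay, region, blend):
    """Blend `overlay` take weights into `base`, but only for the
    shapes listed in `region`. blend=0 keeps the base take untouched;
    blend=1 fully replaces the region with the overlay take."""
    out = dict(base)
    for shape in region:
        a = base.get(shape, 0.0)
        b = overlay.get(shape, 0.0)
        out[shape] = a + (b - a) * blend  # linear interpolation
    return out
```

So a "blink" take could be layered onto a dialogue take by blending just the eye region, leaving the mouth shapes from the dialogue intact.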
  16. Game engines and Alembics are funny things. What works on re-import to C4D may not work at all when going to UE5; I've been battling this issue for the last few months with Unity. What I discovered is that a bit of a mindset shift is required when going from linear to realtime. Are you exporting animation that starts from an "empty" scene, i.e. a zero transform scale, or a sweep object with an animated start/end point? Game engines don't like importing content which has effectively nothing in it at the start, so add something on frame 0, even if it's just a copy of your final keys. Are you exporting your Alembics via the File > Export menu? If so, I would recommend using "Bake as Alembic" from the Object Manager instead, as this seems to work much more reliably.
  17. Late to this, but you can always edit the fbx after it's been through mixamo if stuff doesn't work, including rigging, weighting, blendshapes, materials etc etc
  18. The Katana looks great. Have you checked out the screen? If you can stretch to it, I really recommend going to 1440p, and make sure you check the RGB coverage, particularly for motion graphics; you want something fairly accurate. The Asus TUF line includes some great 100% DCI-P3 coverage screens which are not too pricey.
  19. You've cached the sim out to VDB, right? I can't think of any other reason why that wouldn't work.
  20. Two things here: 1. I think you're misunderstanding how blending modes work. In the "add" blend mode, for example, the RGB values of the pixels are added together, which has the effect of making the affected areas lighter. I would suggest reading the After Effects manual for further information on blend modes; the C4D manual also has a comprehensive breakdown. 2. You're probably overcomplicating the comp setup. Unless it's important to be able to affect the background layer totally independently from the foreground, there's no need to render each object as a separate pass. You can render a single beauty pass, together with mattes for each object in order to isolate them. In the Standard renderer this is done using object buffers and a compositing/render tag; in Redshift it's done with puzzle matte AOVs and a Redshift object tag. Moving from "getting it all done in render" to assembling passes in post is quite a big step, so I recommend reading around the theory alongside experimentation.
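To make point 1 concrete: in "add" mode each channel of the two layers is summed and clamped, which is why the result can only ever get lighter. A minimal illustration in plain Python (normalised 0–1 values; this shows the math, not any app's actual implementation):

```python
def blend_add(a, b):
    """'Add' blend mode: per-channel sum, clamped to 1.0.

    Every output channel is >= both inputs, so the affected
    area always gets lighter (or stays the same) — it can
    never darken.
    """
    return tuple(min(x + y, 1.0) for x, y in zip(a, b))
```

Adding a mid-grey to anything non-black pushes it towards white, which is the behaviour the original question was mistaking for a bug.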
  21. If I were you, I'd consider adding a couple of SSD cache drives to that build. I'd also upgrade the OS drive to 1TB, given that prices are low at the moment. Finally, I'd double up to 128GB RAM if at all possible.
  22. Bake them before they go to the farm — bake the targets of the tags, not the tags themselves.
  23. The GSG plugin is the issue. I don't have this plugin and the scene renders with alpha. You may need to mess with some of the parameters on the GSG objects in order to render the scene with an alpha channel.
  24. In the video you referenced, the C4D motion capture data already has a Character Definition tag applied. I suspect that this was set up in a T-pose before being added to the Content Browser. I really would recommend trying out my suggestion; it's relatively painless to do, and once you get the hang of the non-linear animation system, working with mocap becomes much more enjoyable.
  25. Your walk cycle needs to include at least one frame of T-pose at the start, or the Character Definition tag won't bind correctly. You can either key the bones manually or use the non-linear animation system to chop up the animation. Ensure that when you apply the Character Definition tag both animations are in T-pose and you should be fine.

Copyright Core 4D © 2024 Powered by Invision Community