
Everything posted by eikonoklastes
-
You could print the PDF. Alternatively, an e-Ink reader might be easier on your eyes.
-
Thank you. Yes, for the balls I used the Shape Match constraint, which uses sphere packing to simulate rigid bodies with Vellum. There is a global velocity damping setting for Vellum, but you can also apply per-point drag and speed-limit forces to any geometry in any sim in Houdini, not just Vellum. In fact, any force/attribute can be per-point or per-object - that's where Houdini is pretty much unmatched. A sketch of the per-point idea is below. If you wish to inspect the file, the free Apprentice version will open it without any changes. It will just downgrade and save the file to the non-commercial .hipnc format, which then cannot be opened in paid versions (without setting the Houdini session to non-commercial).
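To illustrate, here's a minimal VEX sketch of a per-point drag force in a POP Wrangle. The f@dragscale attribute is a hypothetical per-point value you'd paint or compute upstream, not something from my scene file:

```
// POP Wrangle (inside the DOP network), run over Points.
// f@dragscale is a hypothetical per-point drag coefficient set upstream.
// Drag opposes velocity, so each point gets damped by its own amount.
v@force -= f@dragscale * v@v;
```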
-
That would likely need a different, more rigid surface for the balls to roll on, considering that with this cloth the leading balls create a "wake" that affects the paths of the other balls. I'll try to put together a pure RBD sim sometime soon. The per-ball density and scaling (with the same emission position and velocity) is very doable though; a sketch of the idea is below.
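For the per-ball variation, something like this Point Wrangle over the emitted balls' packed points would do it. The attribute names follow Houdini's packed-RBD conventions, but the ranges are made-up illustration values, not ones from an actual scene:

```
// Point Wrangle over packed RBD points: per-ball scale and density.
// pscale scales each packed piece; density is read when the solver
// computes mass. The ranges here are arbitrary.
f@pscale = fit01(rand(@ptnum), 0.5, 1.5);
f@density = fit01(rand(@ptnum + 137), 500.0, 2000.0);
```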
-
This is my version of the very nice animation done by @MJV in this thread. Some notes:
- I deliberately exaggerated the cloth bending effects in this one to differentiate it from an RBD sim. I realise that this marks a departure from the objective of the original, but I quite like how this turned out.
- The cloth is fairly high density; I have not subdivided it after the sim. It took 5 mins to sim 480 frames, which is pretty good imho. A pure RBD sim of this would be significantly faster.
- I animated the velocity damping from very high at the start, to let the big ball settle into place quickly (which is also why the start of the animation looks unrealistic), down to a very low amount once the smaller balls start getting emitted, so that they wouldn't be bogged down by the damping.
- There is variable stiffness on the cloth: it's very stiff at the borders, to simulate the tautness of it being pegged to the ring, and falls off to not very stiff in the middle, to simulate the lack of support there.
- Vellum usually does not allow object emissions at intervals (the built-in options are Start Frame, Each Frame, and Each Substep). This being Houdini, a quick expression let me override that and emit the balls once every second instead (a sketch of the idea is below).
- I am including the scene file here, but fair warning: I have not annotated anything, so good luck. It also includes the Solaris setup.
Marbles.hiplc
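In case anyone wants the emission trick without digging through the file, the gist is an HScript expression on the source's activation parameter. The exact expression in my scene may differ; this is the general idea, assuming Emission Type is set to Each Frame:

```
// HScript activation expression: evaluates to 1 on the first frame
// of every second and 0 otherwise, independent of the frame rate.
($F - 1) % $FPS == 0
```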
-
Why "Density" and "Mass" values in Dynamics instead of "Weight"?
eikonoklastes replied to MJV's topic in Discussions
Looking good. The motion blur looks a bit wonky to me though. The balls have a conical shape to them that seems inaccurate. What's curious is that the conical shape tapers to the right even when the balls are moving in opposite directions.

Very nice.

Looks very cool. You didn't really need a cloth sim for this at all though, right? It's essentially an RBD sim; the "cloth" could have been modeled?
-
Why "Density" and "Mass" values in Dynamics instead of "Weight"?
eikonoklastes replied to MJV's topic in Discussions
Not quite. It uses Vellum Grains to pack the object with spheres and treat it as rigid. It's actually a pretty smart workaround to the problem, and it scales well. Here is some more info: Shape matching (sidefx.com)
-
Why "Density" and "Mass" values in Dynamics instead of "Weight"?
eikonoklastes replied to MJV's topic in Discussions
A Shape Match constraint will treat objects as (very close to) rigid in Vellum. houdini_Lkxk5yVVCb.mp4
-
I agree with you on Blender being set to potentially replace C4D, given that it's free and its toolset is growing pretty rapidly, but the ground reality is that there is a sizable chunk of solo users who, shall we say, "procure" C4D for "free" anyway. I don't see that changing any time soon.

The main reason why there are so many Maya and Houdini postings is that they are the de facto apps for studios: not because of their toolsets (although they are both very good for what they do) but because of how well they scale (to farms) and integrate with existing pipelines, and how extensible they are. Blender is very good software, but it does not do those things in its current state (I'm sure that will change over time), and as such very few studios will want to use it on big productions.

This landscape isn't set to change any time soon either. There are no other major players with the kind of head start that Houdini and Maya have, and under Autodesk, Maya has been steadily falling behind as SideFX shores up deficiencies in Houdini in a big way. While Blender's progress has been amazing in the recent past, building these fundamental frameworks is no mean task. I don't doubt they'll do it; I just don't see it happening in the near future.
-
The C4D peeps are having a bunch of fun with this, so why not join the party? (Well, looking in from the outside, I guess . . .) If you Google "chladni pattern formula", you can pick up the formula from the first result, and just directly plonk it down into a Point Wrangle. Generate a couple of sliders, and Bob's your uncle. I also added a locator to alter the center point, for more variety. Scene file attached below. houdini_VkTEEjYWSH.mp4 Chladni Patterns.hiplc
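For anyone who doesn't want to Google it, the classic two-mode Chladni plate formula dropped into a Point Wrangle looks something like this. The parameter names and the 0-to-1 plate size are my assumptions here, not necessarily what's in the attached file:

```
// Point Wrangle over a grid spanning 0..1 in X and Z.
// Classic Chladni plate amplitude for mode numbers m and n;
// points near zero amplitude are where the "sand" collects.
float m = chf("m");
float n = chf("n");
float x = @P.x;
float z = @P.z;
float a = cos(n * PI * x) * cos(m * PI * z)
        - cos(m * PI * x) * cos(n * PI * z);
@P.y = abs(a) * chf("amplitude");
```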
-
It seems broader than that, from what I understand from that excerpt. They (effectively) say "you cannot use the app to train any AI system to simulate or perform a function that is similar to any function in the app". So it's not just restricting you from generating images; it's restricting you from doing anything that is similar to any function in C4D. Which is a remarkably broad stroke in itself. I wanted to say "ludicrously", but settled for "remarkably".
-
What are they worried about? That someone will use AI to create 3D tools that match or surpass those in C4D? Isn't that already what's happened, and without AI?
-
Another fun setup, from a C4D thread (Houdini scene file attached below): houdini_6Y6BwBfK7E.mp4 This scales really well, too. Here it is with 12 thousand arrows, playing in real time in the viewport, on my 7-year-old machine. Not sure what that would ever be useful for, but still pretty cool to see in action (the FPS is displayed in the bottom right; green means real time): houdini_pk5I6C4dpk.mp4
This is the C4D thread: Arrow wave modeling and rendering translucent planes - Cinema 4D - Core 4D Community
Arrow Waves.hiplc
-
Hi, I have attached my scene file in the original post; you are welcome to take a look at it. I do use a particle system to emit the circles, to get a pulse emission rather than a random emission. There are other ways to do this, but I just used whatever was easiest for me to pull off. The rest of the setup is also very easy; the only complicated bit is the loop to get the variation on each copy. While it sounds complicated, it's actually easy to execute; it's just done in a rather unique Houdini way (a sketch of the loop trick is below).
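The per-copy variation trick, in rough strokes: inside a For-Each block, a wrangle can read the current iteration number from the block's metadata node and use it to drive unique noise values per copy. The parameter names here are illustrative, not the ones in my file:

```
// Point Wrangle inside a For-Each block, with the block's metadata
// node wired into input 1. Each iteration gets its own noise offset
// and frequency, so every copy distorts differently.
int it = detail(1, "iteration", 0);
float freq = chf("base_freq") + it * chf("freq_step");
vector offset = set(it * 10.0, it * 20.0, 0.0);
vector n = noise(@P * freq + offset);
@P += (n - 0.5) * chf("amp");
```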
-
This is a turbo-charged version of the previous setup, inspired by a recent post in the C4D thread doing the same thing. Here's everything happening in this one:
- Circles are emitted and scale up in concentric fashion.
- They distort as they grow, but only on the outer circles; the inner circles remain intact.
- They have a colour gradient from inner to outer.
- They are duplicated and projected onto a sphere.
- Each copy has its own noise variation; specifically, each copy has a different noise offset and frequency applied. I can precisely define what these values are for each one.
- There is a mask (falloff) that drives both scale and colour of these copies (a sketch of this is below).
There is a scene file for this. I also just threw a noise on top of the whole thing, for shits and giggles, and added a basic render setup in Karma. houdini_SJEtYGNQHo.mp4
This is the post from the C4D thread: Concentric Rings on a Sphere
Scene File.zip
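Here's a minimal sketch of how a single mask attribute can drive both of those properties at once. The f@mask attribute and the parameter names are assumptions for illustration, not taken from the scene file:

```
// Point Wrangle: one falloff mask drives both scale and colour.
// f@mask is assumed to be 0..1, computed upstream (e.g. from a
// distance-based falloff on the sphere).
f@pscale *= chramp("scale_falloff", f@mask);
v@Cd = lerp(chv("inner_colour"), chv("outer_colour"), f@mask);
```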
-
You can add an extra 1 TB for $10 per month. I don't believe there's a cap on that, as long as you keep adding $10 for every TB. However, if your size requirements are that large, you'll want to look into their business plan offerings, which work out cheaper at larger capacities and for more users.
-
I use OneDrive myself and am quite happy with it. I have the Office family sub, which gives each member 1 TB of storage: a pretty good deal. I haven't faced any issues with speed or sharing limits; it has all worked fairly well for me so far.
-
Here's a souped-up 3-ball version of the above. This isn't included in the scene file above; I'll leave that to you guys to figure out. There's obviously some heavy trickery afoot to keep them all on the plate though... Untitled_H.264.mp4
-
Cleaning up some files, I came across this fun one that I did a few months ago. Houdini very helpfully allows you to feed a simulation's output back into that same simulation, which makes this possible. There is a noise force being applied to the ball that pushes it around, while the plate tries to keep it from falling off. There are no keyframes; it's entirely simulated. I have included the scene file below. It was inspired by a similar animation I saw on the Houdini Reddit, but I can't link that right now unfortunately, as that subreddit has gone private. houdini_IrqnsPIect.mp4 Self-balancing Plate and Ball v2.hiplc
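The noise force on the ball is the simple part; something along these lines in a POP Wrangle would do it (the node choice, parameter names and ranges are my assumptions here, not the actual contents of the file):

```
// POP Wrangle on the ball: a time-varying curl noise force that
// shoves the ball around unpredictably.
vector n = curlnoise(@P * chf("freq") + @Time * chf("speed"));
v@force += n * chf("amp");
```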
-
Emitting circles and distorting them as they expand outwards
eikonoklastes posted a topic in Houdini
Took some inspiration from the C4D forum, and thought this'd be a nice, fun and simple exercise. There's mask control with a ramp, and you can specify the colouring and distortion based either on the age of the particles or the mask. I've included the scene file here. This is the C4D thread: Emit Circles and Distort Them as They Move Outwards.hiplc
-
I'd like to add an extremely important factor here, one that only SideFX seems to provide, and that is world-class customer support. We are so used to what Autodesk and Maxon and Adobe call "customer support" that what SideFX do here seems almost like fantasy, but is actually real.

For starters, they put out daily builds that contain bug fixes and even new features on occasion. Anyone can download these daily builds (even the free Apprentice users). If you submit a showstopping, high-priority bug report, they will address it with urgency and put out a bug fix at astonishing speed. You can send them your scene file to analyse, and they will actually analyse it and get back to you with a surprisingly quick turnaround.

Their customer support staff are well-trained and actually competent, and you will frequently get direct responses from the developers, even for minor reports. The forums are also frequented by developers and by helpful industry veterans, so you will almost always get a useful, helpful response to even tricky questions.

I am not a SideFX shill, and will criticise them where applicable, but I can tell you wholeheartedly: the support you get from them is on an entirely other level from everyone else out there, and that adds a tremendous amount of value to a production.
-
It's the opposite, though, as they have thus far been extremely careful to use only their own licensed material for training, and have pledged to compensate contributors to their stock library (which is being used for training their AI models). As with all things Adobe, we'll have to see how that pledge actually plays out, but so far all indicators are that they want to play fair.

I suppose we will have to define what "significant human intervention" means. There already exist many apps that can produce compelling imagery at the click of a button (that don't use any AI). This is a much broader topic, and likely exceeds the scope of a single forum post, but the last 40 years of hardware and software progress have made it easier and easier to produce better and better imagery, with less human intervention. Rendering is literally a process whereby you press a button to relinquish human control to a computer to produce the final output. Yes, there has to be initial prep before you press that button, but all that prep is basically translating a human thought into a state for the computer to output. My stance is that it's the same with AI, except that AI makes all that prep work less of a chore. Basically, a tool that removes more technical barriers between an idea and an output. Replacing a tedious, technical process isn't necessarily a bad thing in the pursuit of content creation. There is definitely purity and value in pure mechanical and technical prowess, but in a capitalist world, easier methods will always take general precedence, especially if they can reduce costs. For me, a "significant human intervention" need only be the creative idea or direction that a human can come up with. Being able to translate that effortlessly onto a certain medium should not detract from that.

I'm in the beta, yes, and have played with it a bit. It's promising, but extremely limited in what it can currently do. Each generation is random; any further tweaking will require traditional methods. It also struggles with people, especially non-Caucasians. It's also heavily censored. I couldn't make an "exploding text" effect, because the word "explode" is banned by them, so good luck trying to make a "cool guys don't look at explosions" poster. I think Adobe's approach of adapting Firefly tech into its core apps (like Generative Fill in Photoshop, and the vector recolouring in Illustrator) is the correct one. By making it a tool that can do a certain something and be complemented by the existing toolset, it works a lot better for actual work, rather than being a self-contained quasi-random image generator that's only useful for generating concepts, and falls apart when you get any specific notes/feedback.

I don't believe there are any generation limits. It's online only, but I imagine that the final product should be able to work offline, at least optionally. If it works offline (like it should), I wouldn't want to pay anything above the existing CC sub. If it's online, I can imagine it's fair for them to charge an extra amount; I have no idea how that might be structured though. Having said that, I would be miffed if it was online only. There's zero reason for them to do that except greed (and this is Adobe, so I do actually expect that).
-
Left Angle Autograph thread - can we dump After Effects yet?
eikonoklastes replied to BoganTW's topic in Discussions
I haven't tried it yet, but it seems to have some decent USD support already. This can potentially be huge for doing some advanced post-work. I really need to get my hands on the trial and start poking around.
-
Left Angle Autograph thread - can we dump After Effects yet?
eikonoklastes replied to BoganTW's topic in Discussions
Another big win in my books, straight out of the gate, is that it runs on Linux. I know Linux fanboys like to crow about how Linux performs so much better, but I am not a Linux fanboy; I don't really like the OS. But I have tried it out with Houdini, and it absolutely flies compared to Windows. It feels like I upgraded my system hardware, when all I have done is reboot into a different OS. Adobe, and Ae specifically, was the one thing keeping me from moving my work to Linux entirely, but with Autograph I can potentially, finally, do that.
-
Left Angle Autograph thread - can we dump After Effects yet?
eikonoklastes replied to BoganTW's topic in Discussions
Contrarian opinion here, but I think the pricing is OK. It's not ideal, but it's not terrible either. $60 for a monthly sub is reasonable when even a low day-rate is several times that. I am at a stage with Adobe (and have been for a while) where I will happily pay double the price for a better experience.

The thing that irks me most is the time-limited trial. That's not ideal at all for unvetted software, or for busy schedules. I don't have the time to cram learning a brand-new app into 30 days. They need to remove the 30-day limit and offer an unlimited trial. I don't care if they put a massive watermark over the UI and clamp the output to 640x480, but you need to be able to thoroughly test something like this before jumping in. Houdini is the gold standard for trials: Apprentice is fully functional and entirely free, including all updates. Just copy them.
-
Left Angle Autograph thread - can we dump After Effects yet?
eikonoklastes replied to BoganTW's topic in Discussions
That's precisely why people want an alternative to After Effects: it does pretty much only those things, but with a much worse user experience. The future does seem very bright, with amazing AI tools on the horizon, but in today's world, animators and compositors still need to grind out work keyframe by keyframe, layer by layer, node by node, and have been waiting for a tool that isn't as poor as After Effects currently is.

Adobe have slept on their motion graphics monopoly with Ae for around 20 years now, mirroring Cinema 4D's laughable development pace, content to let 3rd parties fill in the very large gaps for features that should exist in the app itself. Houdini and Blender becoming more accessible over the last couple of years seems to have finally woken the C4D devs up, but no such competition exists for Ae. I'm hoping that Autograph is the one that can at least wake Adobe up and make Ae suck less, so that something as basic as exponential easing may finally exist out of the box in an app that has been a market leader for over 30 years. Ae only just added (literally in the latest release) the ability to re-use a track matte without having to duplicate it. It's embarrassing, and they really need a bit of shaking up.