Maya Object Model
I have quite a lot of experience with the Maya APIs, both Python v1/v2 and C++. While they’re very flexible and let you build a lot of custom Maya tooling, one thing Maya has been lacking for decades is a simple Object Model.
For comparison, take almost any other DCC: Blender, Cinema4D, Houdini, Nuke. There’s always some kind of object model encapsulating the functionality of the nodes, the viewport and other parts of the application.
To be fair, Maya has PyMel, which is an attempt to create exactly the model I’m talking about: everything is an object, every node has its own class with specific functions (light nodes would have something related to light management, and so on).
There’s just one problem: PyMel is horribly slow. Not always, but quite often. Often enough that I can’t afford to wait that long.
There’s another problem on top of that: Maya itself is very slow when working with thousands of objects. Around 20-40k objects the interactive experience becomes… uhm… let’s say “not that interactive”.
Possible solution?
Let’s clear up a few things before we start:
- I don’t need the full functionality of the Maya API in my nodes; a limited set is enough for most operations (the 80/20 rule), and the smaller the code base, the easier it is to maintain. I’m not trying to write a “full-featured API as a product”: it’s a “framework to help get things done”.
- I don’t need to support hundreds of Maya node types: a limited set is enough (for example light nodes, but not groupIds). The fewer, the better.
- It’s possible to create a “generic node” class or mixin covering the generic functionality required by different types of objects (lights, cameras, meshes, transforms), so you don’t have to write a class for every one of those hundreds of Maya node types (though you still can if needed).
- MEL is out of the question for this kind of work.
- PyMel is too slow.
- C++ API development is possible, but requires a different level of effort.
- Python API v1 is a direct copy of the C++ API, so writing in it isn’t the best idea in the world either.

Python API v2 was chosen (with minimal help from MEL where it’s absolutely necessary or makes sense as the “lesser of evils”).
The solution was to create a new “Immediate Mode Maya Object Model” wrapping Maya’s Python API v2.0 in a hierarchy of classes.
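To give a rough idea of the shape of that hierarchy, here’s a minimal sketch of a generic node wrapper over maya.api.OpenMaya. This is not the actual API: all class and method names here are hypothetical, and the real classes carry much more functionality.

```python
import maya.api.OpenMaya as om


class Node(object):
    """Generic wrapper around a dependency node; holds an MObject handle, not a name."""

    def __init__(self, mobject):
        # An MObjectHandle stays valid when the node is renamed or reparented
        self._handle = om.MObjectHandle(mobject)

    @property
    def mobject(self):
        return self._handle.object()

    @property
    def name(self):
        return om.MFnDependencyNode(self.mobject).name()

    def rename(self, new_name):
        # Renaming through the function set does not invalidate the wrapper
        om.MFnDependencyNode(self.mobject).setName(new_name)


class DagNode(Node):
    """DAG-specific helpers: paths, world matrices, parenting."""

    @property
    def dag_path(self):
        return om.MDagPath.getAPathTo(self.mobject)


class Light(DagNode):
    pass  # light-specific helpers (intensity, color, ...) would live here


class Camera(DagNode):
    pass  # camera-specific helpers (focal length, film back, ...) would live here
```

Holding MObject handles instead of string names is what makes operations like renaming nested hierarchies safe: the wrappers keep pointing at the right nodes no matter how the paths change.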
Main features:
- Very lightweight: the API lives in a single Python file, with minimal dependencies.
- MObject-based, so things like renaming nested hierarchies of bones are not a problem.
- Flexible object classification system for object type detection, supporting custom object types: Arnold, V-Ray and Octane lights, cameras, shaders and the rest.
- Fast scene iterators to pick the required types out of a scene or sub-hierarchy, up to 10x-1000x faster than PyMel and comparable to native MEL (see the iterator sketch after this list).
- Tag system allowing an arbitrary number of arbitrary tags to be set on objects (even binary data if needed), used to record shading and rig assignment mappings and the current pipeline step / task / asset (see the tag sketch after this list).
- Support for class-specific operations: Shaders, Lights and Cameras support baking into world space and querying all the nodes that belong to a given one (to export a piece of a Maya scene or a shading network).
- Extensible generic import/export system to support the Shotgun publishing app. Any type of geometry or plug-in object, such as Golaem, Ornatrix or Bifrost outputs, can easily be added both to the API and to the Shotgun integration.
- Shader reconnection mechanism and integration into the Shotgun Loader.
- A set of generic animation nodes allowing Maya animation channels to be generated, recorded and written out, or exported as, for example, Nuke .chan files if needed (a world-space bake to .chan is sketched after this list).
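For the iterators, the core idea is simply to stay on the API v2.0 iterator classes and never touch string-based commands. A minimal sketch with hypothetical function names (the real API filters through its own classification system, not just by MFn type):

```python
import maya.api.OpenMaya as om


def iter_nodes(api_type=om.MFn.kCamera):
    """Yield MObjects of a given API type from the whole scene."""
    it = om.MItDependencyNodes(api_type)
    while not it.isDone():
        yield it.thisNode()
        it.next()


def iter_dag(root=None, api_type=om.MFn.kMesh):
    """Yield MDagPaths of a given type, optionally under a sub-hierarchy."""
    it = om.MItDag(om.MItDag.kDepthFirst, api_type)
    if root is not None:
        it.reset(root, om.MItDag.kDepthFirst, api_type)
    while not it.isDone():
        yield it.getPath()
        it.next()


# Example: print every camera shape in the scene
for cam in iter_nodes(om.MFn.kCamera):
    print(om.MFnDependencyNode(cam).name())
```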
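The tag system can be imagined as a thin layer over dynamic string attributes; again, a rough sketch with hypothetical names rather than the actual implementation:

```python
import maya.api.OpenMaya as om


def set_tag(mobject, tag_name, value):
    """Store a string value on the node, creating the attribute on demand."""
    dep = om.MFnDependencyNode(mobject)
    if not dep.hasAttribute(tag_name):
        attr_fn = om.MFnTypedAttribute()
        attr = attr_fn.create(tag_name, tag_name, om.MFnData.kString)
        dep.addAttribute(attr)
    dep.findPlug(tag_name, False).setString(value)


def get_tag(mobject, tag_name):
    """Return the stored value, or None if the node has no such tag."""
    dep = om.MFnDependencyNode(mobject)
    if not dep.hasAttribute(tag_name):
        return None
    return dep.findPlug(tag_name, False).asString()
```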
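And the world-space baking plus .chan export mentioned above can be pictured like this: step through the frame range, sample the object’s world matrix, and write one line per frame. The column layout (frame, translate, rotate in degrees) and the function name are assumptions for illustration only; a real exporter would also handle rotation order, units and focal length.

```python
import math
import maya.api.OpenMaya as om
import maya.api.OpenMayaAnim as oma


def bake_to_chan(dag_path, out_path, start, end):
    """Sample the world transform per frame and write a simple .chan file."""
    with open(out_path, "w") as f:
        for frame in range(start, end + 1):
            # step the scene to the frame, then sample the world matrix
            oma.MAnimControl.setCurrentTime(om.MTime(frame, om.MTime.uiUnit()))
            world = om.MTransformationMatrix(dag_path.inclusiveMatrix())
            t = world.translation(om.MSpace.kWorld)
            r = world.rotation()  # MEulerRotation, radians
            f.write("%d %f %f %f %f %f %f\n" % (
                frame, t.x, t.y, t.z,
                math.degrees(r.x), math.degrees(r.y), math.degrees(r.z)))


# Hypothetical usage: bake the first selected object over frames 1-100
sel = om.MGlobal.getActiveSelectionList()
if sel.length():
    bake_to_chan(sel.getDagPath(0), "/tmp/camera.chan", 1, 100)
```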
The API was created for a new (and as yet undisclosed) pipeline project, and has been tested and demoed multiple times, including in a cloud deployment.
All the parts work just as expected: meshes, cameras and lights are baked and published to OBJ/FBX/Alembic/USD from within the Shotgun UI with a single button, and shaders go into Shader Bundles with path-neutral textures.
All the assets are then loaded with the Shotgun Loader app and automatically reconnected to each other using tags (everything happens under the hood, without any user interaction).