I’ve been really happy to see such a positive response to the visual identity of Little Nemo and the Guardians of Slumberland. I put a lot of care and attention into each frame of animation that goes into the game, and I thought it would be nice to elaborate a little more on what my art pipeline looks like. It’s fairly elaborate, so I’m going to break it down into at least two parts. So for today, here’s Part I.
I’ll start with the final image first just to set the stage, so here’s a screenshot of Nemo in the Gumdrop Gardens:
And I’ll zoom in so you can get a closer look at Nemo. The paper texture on Nemo comes from the shader, but all of the other details, such as the sprite outline, cross-hatching, and shading, are drawn into the textures.
But aside from that paper texture, one of the key features of the shader is the ability to do a “palette swap” for sprites. This is used to let Nemo swap PJs, and it also comes in handy for other sprites, which sometimes need to appear with different colors in different contexts. Here’s Nemo wearing the “Master of Dreams” PJs:
And since we’re utilizing Unity’s 2D lighting system, we can of course change the lighting on Nemo. You’ll see that the outline is actually emissive if we crank up our emissive global light. This is all very useful for setting up different lighting moods in different environments.
And then if I change the sprite’s material so that it’s using the default shader, you can see what the sprite actually looks like:
Yikes! If you’re wondering where all the hatching, shading, and outline information went, that’s all in the mask data texture, so it’s not visible here. In this primary texture, we have flat blocks of RGBCMY colors that identify areas of color, and we then apply a gradient to each of those areas. And if you’re wondering what the mask data looks like:
Here, the red channel tells the shader where to sample along the gradient for the RGBCMY color block identified in the primary texture, and the green channel represents the emission levels. (The blue channel is reserved for now, but may be used for rim lighting later on.)
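To make the two-texture scheme concrete, here’s a tiny sketch (Python, purely illustrative; the names are mine, not the project’s actual code) of how one texel’s data might be interpreted, assuming the six pure block colors simply map to gradient indices 0–5:

```python
# Illustrative only: how one texel of the two textures might be decoded.
# The flat block color in the primary texture identifies WHICH gradient to
# use; the mask texture says WHERE on it to sample and how emissive the
# pixel is. (Hypothetical names, not the game's actual code.)

# Pure block colors -> gradient index (R, G, B, C, M, Y)
BLOCK_TO_INDEX = {
    (255, 0, 0): 0,    # red block
    (0, 255, 0): 1,    # green block
    (0, 0, 255): 2,    # blue block
    (0, 255, 255): 3,  # cyan block
    (255, 0, 255): 4,  # magenta block
    (255, 255, 0): 5,  # yellow block
}

def decode_texel(primary_rgb, mask_rgb):
    """Return (gradient_index, gradient_position, emission) for one pixel."""
    gradient_index = BLOCK_TO_INDEX[primary_rgb]
    gradient_position = mask_rgb[0] / 255.0  # red channel: where to sample
    emission = mask_rgb[1] / 255.0           # green channel: emissive level
    # mask_rgb[2] (blue) is reserved, e.g. for rim lighting later
    return gradient_index, gradient_position, emission
```

So a cyan texel in the primary texture with a mid-gray red value in the mask would mean “sample the fourth gradient about halfway along.”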
So, to combine those two textures together into a fully rendered sprite, we first need the gradients that we’ll be mapping from. For that there is a gradients map ScriptableObject which is just a relatively simple object that allows us to define a gradient for each color block. Here’s the inspector panel for one:
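In spirit, that gradients map is just “one gradient per color block, each defined by a handful of color stops.” Here’s a hypothetical stand-in (plain Python, not the actual ScriptableObject) for what defining and sampling one gradient amounts to:

```python
# A hypothetical stand-in for one entry of the gradients-map asset: a
# gradient is a list of (position, (r, g, b)) stops, sampled by linear
# interpolation between neighboring stops.

def sample_gradient(stops, t):
    """Linearly interpolate a list of (position, (r, g, b)) stops at t in [0, 1]."""
    if t <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if t <= p1:
            f = (t - p0) / (p1 - p0)
            return tuple(round(a + (b - a) * f) for a, b in zip(c0, c1))
    return stops[-1][1]

# e.g. a made-up "red block" gradient: dark shadow -> warm highlight
red_block = [(0.0, (64, 16, 24)), (0.5, (180, 60, 48)), (1.0, (255, 200, 160))]
```

The inspector panel is essentially a friendlier editor for six lists of stops like that one.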
And all of the gradient maps we have in the game get flattened down into a single, relatively small Texture2DArray, which is what the shader uses to do the lookups. Perhaps that sounds complicated, but essentially we’re just using the red value in that mask texture above to figure out where on the gradient to sample, and the RGBCMY value in the other texture to determine which gradient to use.
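The flatten-and-lookup idea can be sketched in a few lines. This is Python standing in for the shader, with made-up names and a made-up bake width, just to show the shape of it: each gradient gets baked into a fixed-width row of colors, and shading a pixel is then a single indexed lookup.

```python
# Rough sketch of the flatten-and-lookup idea. Each gradient is baked into
# a fixed-width row of colors; per pixel, the shader then only needs two
# numbers: which row (from the RGBCMY block color) and how far along it
# (from the mask's red channel). Names and sizes here are hypothetical.

WIDTH = 64  # samples baked per gradient row

def bake_rows(gradients):
    """Flatten a list of gradient-sampling functions into rows of WIDTH colors."""
    return [[g(i / (WIDTH - 1)) for i in range(WIDTH)] for g in gradients]

def shade_pixel(rows, gradient_index, gradient_position):
    """What the shader effectively does: one lookup into the baked rows."""
    column = round(gradient_position * (WIDTH - 1))
    return rows[gradient_index][column]

# Two toy gradients: a grayscale ramp and a black-to-red ramp.
rows = bake_rows([
    lambda t: (int(t * 255),) * 3,
    lambda t: (int(t * 255), 0, 0),
])
```

Seen this way, a palette swap is cheap: the sprite’s pixels never change, you just point the shader at a different set of baked rows.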
Here’s a birds-eye view of the shadergraph I’ve built that does just that:
It looks like a bit of a mess, but ultimately it’s just doing all of the things I’ve described above:
Of course I’ve glossed over the complexity of the shadergraph a bit, but you’ve got the broad strokes of it now. And I’m always happy to answer questions about this stuff if you’re curious to know more. It’s easiest to find me in the DIE SOFT Discord server, and we even have a dedicated Q&A channel.
I’ll be back with Part II soon (it can now be found here), which will get more into the process of using the Procreate app on my iPad to actually draw and animate all these sprites. But in the meantime, please let me know in the comments if you enjoy this kind of technical, behind-the-scenes look at the game. Am I overwhelming you with detail? Would you rather hear about a different aspect of the game? Leave a comment with your feedback. Thank you!
- Dave