Dimension Hopper Part 1 – by Benjie Holson

2023-06-12 10:31:55

The next few posts are going to depart a bit from the usual theme (which has been lessons learned from a career doing general purpose robotics). Instead I've decided to use some of my new free time to learn by doing and play with some of the cool new ML all the cool kids are talking about.

My project is to make a 2D platformer where players can design their own levels and then generative AI will create beautifully rendered images to represent the levels. We'll skip to the end so you can see what it looks like now:

And here are some of the different themes, though you can also create your own.

You can play with it here: dimensionhopper.com

I recommend looking at random levels or the gallery and seeing what's out there.

But let's talk about the process to get there.

I'd played a little bit with Stable Diffusion before and it's a really fun toy for making cool pictures, but I always felt like I didn't have quite enough control. For me the joy of creation comes from the interaction between what I do and what I get, and I wanted to have more input. That's why I was so excited when I read about control-net, which gives you a ton more knobs to control the output. I immediately wanted to use it to make a 2D game.

I installed Stable-Diffusion on my laptop, fired up the webui, got control-net working, and fed this depth image into Stable Diffusion:

This has the platforms as the closest pixels (white) and the black background indicates that part is far away. I was using a pixel-art model that had an amazing demo pic on its Civitai page:
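For concreteness, here is a minimal sketch of how a tile-based level could be rendered into that kind of depth image. The grid format, tile size, and function name are my own assumptions for illustration, not the post's actual code:

```python
import numpy as np

TILE = 16  # pixels per level tile (assumed size)

def level_to_depth(grid):
    """Render an ASCII tile grid ('#' = platform) as a depth map:
    white (255) = nearest (the platforms), black (0) = far background."""
    rows, cols = len(grid), len(grid[0])
    depth = np.zeros((rows * TILE, cols * TILE), dtype=np.uint8)
    for r, row in enumerate(grid):
        for c, ch in enumerate(row):
            if ch == "#":
                depth[r * TILE:(r + 1) * TILE, c * TILE:(c + 1) * TILE] = 255
    return depth

level = [
    "........",
    "..###...",
    "......##",
    "########",
]
depth = level_to_depth(level)  # a (64, 128) uint8 image for control-net
```

The resulting single-channel array (or a PNG saved from it) is what gets fed to control-net's depth conditioning.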

So I copied the prompt and settings from that, tweaked a bit, and hit generate…… and got this:

Not quite what I hoped for… I tried and failed to get the depth mode working for a while.

And switched over to “scribble” mode for control-net. Scribble mode takes outlines of shapes and lets them guide the images (instead of depth).

More interesting but still not good.

Changing the prompt:

“pixelart video game environment, platformer level. Create an image of a mystic stone temple in the jungle. shafts of sunlight, dramatic lighting. Vines and cracks in brown stone”

Much closer! I have a picture. It sort of interacted with the level. Still not good, but better.

Ooh, I have the level casting some shadows now.

“pixelart video game environment, Create an image of an abandoned space station, with broken systems, flickering lights, and a sense of danger. Show the wreckage, the abandoned rooms, and the unknown threats that linger.”

Eventually, with some more playing around with settings, I got levels like this pretty reliably:

This is a big improvement over the start, but (1) it didn't really seem like the level was part of the art; it was, at best, pasted over, and (2) the level textures looked like a repeating tileset. Human videogame developers do this so that they don't have to draw a different bit of grass for every square of platform, but I didn't have that constraint. Huh.

I decided that part of my problem was using a model trained on pixel art. It was faithfully copying the genre convention of repeated tilesets, which was exactly what I didn't want. So I changed to a different model, this one built around children's illustration, and my first image out looked like this:

Wow! So much better! The platforms have shadows on them, have objects in front and behind, and it's actually a nice picture. I'm on to something!

The new model made good pictures, but I quickly realized I was walking the line between two failure modes.

Either I have a nice picture with the level kind of mostly drawn on top of it:

Or I get a nice, integrated picture where it's really unclear where you are allowed to stand:

The last one is especially problematic because the ‘level’ part you can stand on has been rendered as a window, exactly inverting the semantics. This is a fundamental problem with using ‘scribble-mode’, as I'm giving Stable Diffusion no way to know what's close and what's far, just outlined shapes.

I went back to depth, but had a thought: what if I hint that the top of the platform has a little lip? Rather than code it into my game rendering engine, I just drew them in Gimp.

Happy little platform toppers

Holy shit, it works!

And more importantly: it works pretty much every time. Most Stable-Diffusion workflows include generating 4-10x more images than you actually need and picking the good one. For my idea to work we needed all of the levels to be playable (you can tell where the platforms are) and most of them to be good (beautiful illustration), because there wouldn't be a human curation step.

I learned that the look of the depth image really changes the quality of the output. I was working on jungle ruins and kept getting images like this:

(Side note: it's amazing how quickly standards rise. Early on that image would have blown me away, but now it looks meh.)

The problem is that there aren't really any reasonable pictures of jungles with that depth map. Jungles (and really everything else) don't look like that. Stuff doesn't float in the air; it has stuff under it holding it up. This leads to breakthrough 2: add supports.

Each platform block projects a dark gray box down to the floor below it, and that gives structure to the world. The dark gray has no gameplay purpose; it just acts as a hint to Stable Diffusion about what the picture is of. And we get much better images.
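A sketch of what that projection could look like, assuming the same single-channel depth array as before (the exact gray value and the helper name are made up for illustration; only the idea of filling the background below each platform is from the post):

```python
import numpy as np

NEAR, FAR = 255, 0
SUPPORT = 80  # mid-gray: closer than background, farther than platforms

def add_supports(depth):
    """Under the lowest platform pixel in each column, fill the far
    background down to the bottom edge with mid-gray 'support' columns.
    Purely cosmetic: the gray never affects gameplay collision."""
    out = depth.copy()
    for col in range(out.shape[1]):
        platform_rows = np.nonzero(out[:, col] == NEAR)[0]
        if platform_rows.size:
            below = out[platform_rows.max() + 1:, col]  # view into out
            below[below == FAR] = SUPPORT
    return out
```

The support value just needs to sit between the platform and background depths so the model reads it as "something standing under the platform".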

Things were pretty good, but I was still having trouble with caves, and I wondered if it was because the model was trying to match the straight, sharp edges of the depth map and having a hard time making it look organic. So I added adjustable roughness to the images (as well as adjustable background depth, so that the sky could be far away for outdoor scenes and closer for indoor/jungle scenes).
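One crude way to implement both knobs, assuming the depth map is a numpy array. The column-jitter approach and all names here are my guesses at the technique, not the real image_generation.py:

```python
import numpy as np

def roughen(depth, amplitude, seed=0):
    """Shift each pixel column up or down by a few random pixels so the
    hard tile silhouette reads as bumpy rock instead of square blocks."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(depth)
    for col in range(depth.shape[1]):
        shift = int(rng.integers(-amplitude, amplitude + 1))
        out[:, col] = np.roll(depth[:, col], shift)
    return out

def set_background_depth(depth, value):
    """Pull the far plane closer (e.g. cave walls) or leave it at 0 for
    open sky; only background pixels are touched."""
    out = depth.copy()
    out[out == 0] = value
    return out
```

A real implementation would probably smooth the jitter between neighboring columns, but even per-column noise breaks up the perfectly straight verticals.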

Notice the bottoms of the platforms are really straight, and it's invented a bunch of light columns / waterfalls that can have perfectly straight verticals. This one is actually pretty good, but the square corners aren't ideal.

Now everything is believably subterranean, and the underlighting on the bumpy rocks looks right.

Stable Diffusion doesn't create any transparency. But I can cheat with the level image, because I have the depth information I fed in and can mask based on that, so characters can go ‘behind’ the platforms when needed. For the gems, I ask for ‘<blue gem/ruby jewel> floating videogame object on a black background’ and then subtract the background in python.
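Both tricks are simple masking operations. Here is a sketch under the assumption that the depth map and renders are numpy arrays; the names and the black-background threshold are illustrative:

```python
import numpy as np

def platform_mask(depth):
    """Pixels that belong to platforms. Level art under this mask is
    composited over sprites, so characters pass 'behind' platforms."""
    return depth == 255

def knock_out_background(rgb, threshold=16):
    """Turn a 'floating object on a black background' render into RGBA
    by making near-black pixels fully transparent (for gem sprites)."""
    alpha = np.where(rgb.max(axis=-1) > threshold, 255, 0).astype(np.uint8)
    return np.dstack([rgb, alpha])
```

A threshold a bit above pure black catches the dark fringe pixels Stable Diffusion tends to leave around the object.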

For characters I found this model, which was really fine-tuned to heck to make 4-frame walk animations. The creator really wanted left/right/up/down animations, so they combine this model with a LoRA of whoever the subject is to create all 4 directions depicting the same character. As a result, I think it's trained so that all of the ‘walk right’ images just get the prompt “PixelartRSS”.

I'd love to be able to prompt something about the character I want, and this sometimes kinda works,

but not nearly as reliably as I'd like. My suspicion is that if I had the training data, restricted it to just the sideways walk, and labeled it with actual descriptions of the characters, it would work better for me. But it does reliably make folks who walk. So as long as you aren't picky, you can get new player character sprites all day long.

From there I just needed to make it work in my own app. I used the wonderful diffusers library to wrap the generation and make my little server. Everything is moving so fast that I'm sure I've done some things in very silly ways in my image_generation.py, and there are probably 2-3x speedups to be gained by configuring it right, but for now it all works, and it's fun to play with. Why are you still reading? Go check it out!

dimensionhopper.com

And subscribe for more about the development of this game, and more robotics content after that.
