
Backrooms videos

The Backrooms is an internet creepypasta that got started in 2019, when a simple post and picture on 4chan caught fire. It became stories, then a video game, then many video games. Then it became live-action short videos that lean heavily on VCR-style 80s video-quality effects.

Andy R Animations shows how he used free versions of Blender and DaVinci Resolve to create some of the higher-quality Backrooms videos. Definitely worth a look at the amazing things people can generate now using free tools.

Largest religious painting ever: The Crucifixion

At Forest Lawn Cemetery in Glendale, California, there is a largely unknown gem: the largest religious painting ever made, a staggering 195 feet long and 45 feet high. It's so large it has its own auditorium-style seating.

The story behind it is almost equally amazing. It was commissioned in 1894 and painted by the Polish painter Jan Styka. To prepare for the work, he traveled to Jerusalem to make sketches and even had his palette blessed by Pope Leo XIII. The gigantic mural was unveiled in Warsaw in 1897, traveled to many European cities, then joined the 1904 St. Louis Exposition. It was seized when the partners failed to pay customs taxes and was considered lost for 40 years. In 1944 it was found, rolled around a telephone pole and badly damaged, in the basement of the Chicago Civic Opera Company. It was restored and then put on display at Forest Lawn by the American businessman Hubert Eaton.

It’s hard to find detailed images of the massive painting, but the pieces I have seen are really astounding. Read more about it here.

Also check out some of Jan Styka's other paintings, such as St. Peter preaching the gospel in the catacombs.

RealityScan

Epic Games worked with Quixel and just released RealityScan, a photogrammetry app that turns smartphone photos into high-fidelity 3D models. The app is a pared-down version of the desktop application RealityCapture; both combine a set of 2D images to make 3D assets. The idea is to enable game developers and other creatives to scan real-world objects at any time and any place for their projects (or the metaverse, if that becomes a thing).

Engadget tried out the iPhone app and shows us how it works.

The scanning process begins with signing into your Epic Games account and taking at least 20 photos of an object from all sides. As you move your phone around, a real-time quality map shows how well you’ve covered it: green denotes well-covered areas, yellow could use more attention and red needs the most additional photos.

The app uploads and automatically aligns the images in the cloud as you take the photos. You can preview the model through the camera view and switch between the quality map and an in-progress, full-color render. When you want to crop it, it pops up 3D handles for you to drag around, ensuring it captures only the item, not the floor beneath it or background objects.

The process works best with simple items captured in even, indirect lighting (reflective or wet surfaces won’t capture well). It also appears to work best with larger objects, as my attempt to capture a small Mr. T action figure resulted in something that looks more like a pointillistic painting than a usable model.
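RealityScan's actual pipeline is proprietary, but the first step of essentially any photogrammetry system, finding matching feature points across overlapping photos so that 3D positions can later be triangulated, is easy to sketch. Here's a minimal illustration using OpenCV; the filenames are hypothetical stand-ins for any two overlapping photos of the same object:

```python
# Minimal sketch of photogrammetry's first step: matching feature points
# between two overlapping photos. (RealityScan's internals are proprietary;
# this just illustrates the underlying idea.)
import cv2

# Hypothetical filenames; any two overlapping photos of one object work.
img1 = cv2.imread("photo_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("photo_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect ORB keypoints and descriptors in each image.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force match descriptors; good matches tie the two views together,
# and with many photos these correspondences let you triangulate 3D points.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f"{len(matches)} candidate correspondences between the two views")
```

The more well-matched views you have from all sides, the better the triangulated point cloud, which is why the app pushes you toward full green coverage on its quality map.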

The iPhone app hasn't gotten very good reviews (2.2 stars), but it's a start.

This idea isn't new; there have been research projects and experiments in this space since the early 2000s. But it's an interesting attempt, even if it seems to have a lot of growing pains to work out.

Discussion before making a modern period romance

Karolina Żebrowska knows a ton about historical clothes. Sadly, she has to put up with largely ignorant makers of modern period shows who make incorrect accusations about the sexism of women's fashions from the past. Here she re-enacts what happens when writers/self-styled designers try to bully experts like Karolina by (as one person put it) 'basically wanting to produce a slightly altered fanfic they wrote when they were 13.'

Definitely check out her other videos on how modern sensibilities, including many modern gender commentators, get what was going on in the past completely wrong. But can you blame them? After all, almost none of those gender and similar degree programs actually study the history, design, or societies they are denouncing; they just study the criticisms of them.

Portland Winter Light Festival

The Portland Winter Light Festival has been going on for 8 years now. I love going to see the amazing artistic light creations people build, as well as some quality people-watching of folks who dress up in their own light-costume creations.

While it wasn't quite as amazing as it has been in years past (there were noticeably fewer new displays, and crowds were dramatically down), it was still a lot of fun.

The CETI collective's Constellation display. More pictures here.

Anadol’s data projections

Refik Anadol makes projection mapping and LED screen art. His unique approach, however, is embracing massive data sets churned through various AI algorithms as his visualization source.

I think one of his unique additions to the space is visualizing the latent space generated during a model's training.
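Anadol hasn't published his pipelines, but the basic idea of making a latent space viewable is straightforward: take the model's high-dimensional embeddings and project them down to two dimensions. A minimal sketch with scikit-learn, where random vectors stand in for real learned embeddings:

```python
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt

# Hypothetical stand-in: random vectors playing the role of embeddings
# a trained model would produce for each item in a large data set.
latents = np.random.randn(5000, 512)

# Project the 512-dimensional latent space down to 2D for plotting.
coords = PCA(n_components=2).fit_transform(latents)

plt.scatter(coords[:, 0], coords[:, 1], s=1, alpha=0.3)
plt.title("2D projection of a latent space")
plt.show()
```

Anadol's installations go far beyond a scatter plot, of course, animating paths through these spaces on building-scale LED walls, but the projection step above is the conceptual core.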

Some of his projects:

Installing Stable Diffusion 2.0/2.1

Stable Diffusion 2.0 was largely seen as a dud. Past version 1.5, you should be aware that the outcry from various artists against having their works sampled resulted in the 2.x branch trying to use fewer of those public sources. This means a more limited training set and likely more limited output variety.

If you are interested in trying Stable Diffusion 2.1, use this tutorial for installing and using 2.1 models in the AUTOMATIC1111 GUI, so you can make your own judgement by using it.

Here are 2 different Stable Diffusion 2.1 tutorials:

You might also try this tutorial by TingTing.
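If you'd rather skip the GUI entirely, Stable Diffusion 2.1 can also be run in a few lines of Python with Hugging Face's diffusers library. A minimal sketch, assuming an NVIDIA GPU with enough VRAM (the model ID is the official stabilityai release on the Hugging Face Hub):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the official Stable Diffusion 2.1 weights from the Hugging Face Hub.
pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

# Generate a single image from a text prompt and save it.
image = pipe("a lighthouse on a cliff at sunset, oil painting").images[0]
image.save("sd21_test.png")
```

This is the quickest way to compare 2.1's output against 1.5 for the same prompt and judge the training-set tradeoff for yourself.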

Retro games with modern graphics – using AI

We're already seeing a real revolution in retro gaming via emulation. Preservation of old hardware is important, but it's also seen as an almost impossible task as devices mass-produced to last only 5-10 years in the consumer market reach decades of age. Failure rates will eventually reach 100% over enough time (unless people re-create the hardware). But with modern emulators, you can still play all the different games on modern hardware.

On a separate development note, we've also seen graphics effects like anti-aliasing and upscaling get the AI treatment. Instead of hand-coded anti-aliasing kernels, they can be generated automatically by AI, and the results now ship from all the major hardware vendors.

But what about the graphics content itself? Retro game art has its own charm, but what if we gave it the AI treatment too?

Jay Alammar wanted to see what he could achieve by pumping retro game graphics from the MSX game Nemesis 2 (Gradius) into the Stable Diffusion, DALL-E, and Midjourney art generators. He presents a lot of interesting experiments and conclusions. He used various features like in-painting, out-painting, Dream Studio, and all kinds of other ideas to see what he could come up with.
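Alammar's write-up has the details of his workflow; a rough approximation of the core img2img step (feeding a retro screenshot into Stable Diffusion and letting a prompt restyle it) looks something like this with the diffusers library. The filename and prompt here are invented for illustration:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")

# Hypothetical input: a screenshot from the retro game, resized up to the
# model's native resolution so the overall composition is preserved.
init = Image.open("nemesis2_screenshot.png").convert("RGB").resize((768, 768))

# strength controls how far the model may drift from the original pixels:
# low values keep the layout, high values reimagine it.
out = pipe(
    prompt="detailed sci-fi spaceship over an alien landscape, concept art",
    image=init,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
out.save("nemesis2_restyled.png")
```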

The hand-picked results were pretty great:

He even went so far as to convert the original opening sequence to use the new opening graphics here:

I think this opens up a whole new idea: what if you replaced a game's entire set of graphics elements with updated AI graphics? The result would essentially just be a themed re-skinning with no gameplay (or even level) changes, but it definitely brings up the idea of starting your re-theming for new levels (fire levels, ice levels, space levels, etc.) by auto-generating the graphics.

Then it brings up the non-art idea of re-theming the gameplay itself, possibly using AI-generated movement or gameplay rules. Friction, gravity, jump height, and so on could all be given different models (Mario-style physics, Super Meat Boy physics, slidy ice-level physics), with the AI coming up with the gravity, bounce, and jump parameters.
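As a sketch of what that might look like in code, imagine each "physics theme" as a small parameter bundle the game engine swaps in. The names and values below are invented for illustration; an AI generator would be the thing proposing new bundles:

```python
from dataclasses import dataclass

@dataclass
class PhysicsTheme:
    """One bundle of movement parameters a generator could propose."""
    gravity: float        # downward acceleration, units/s^2
    jump_velocity: float  # initial upward speed when jumping
    friction: float       # 0 = slidy ice, 1 = instant stop
    bounce: float         # restitution on landing, 0..1

# Hand-written presets; an AI model would emit structures like these.
THEMES = {
    "mario_like":    PhysicsTheme(gravity=1800, jump_velocity=650, friction=0.85, bounce=0.0),
    "meat_boy_like": PhysicsTheme(gravity=2600, jump_velocity=900, friction=0.95, bounce=0.0),
    "ice_level":     PhysicsTheme(gravity=1800, jump_velocity=650, friction=0.08, bounce=0.1),
}

def apply_theme(player, theme: PhysicsTheme) -> None:
    # The engine reads these fields each frame when integrating movement.
    player.gravity = theme.gravity
    player.jump_velocity = theme.jump_velocity
    player.friction = theme.friction
    player.bounce = theme.bounce
```

Because the whole theme is just data, generating a new one is a structured-output problem rather than a code-generation problem, which is exactly the kind of thing current models are decent at.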

Interesting times…

Shrinking 4 years to 4 days with AI generated music video

Photographer and filmmaker Nicholas Kouros spent “hundreds of hours” over 4 years creating a stop-motion meme-themed music video using paper prints and cutouts for a song called Ruined by the metal band Blame Kandinsky. He then created a new version using AI – in 4 days.

The work on the original physical shoot was intense:

“Cutting out all individual pieces was a serious task. Some of the setups were so labor-intensive, I had friends over for days to help out,” says Kouros.

“Every piece was then assembled using various methods, such as connecting through rivets and hinges. We shot everything at 12fps using Dragonframe on a DIY rostrum setup with a mirrorless Sony a7S II and a Zeiss ZE f/2 50mm Macro-Planar lens.”

In a move that likely avoided copyright issues, he used freely usable images. "Most of Ruined was made using public domain paintings and art found on museum websites like Rijks or the Met."

After everything had been shot, the RAW image sequences were imported to After Effects and later graded in DaVinci Resolve.

Using AI instead

Kouros then created a second music video, but this time he used AI. The video took a fraction of the time to make. "In direct contrast with my previous work for the same band, Vague by Blame Kandinsky, it took a little over four days of experimenting, used a single line of AI text prompting, and 20 hours of rendering," says Kouros.

“The text prompt line used was: ‘Occult Ritual, Rosemary’s Baby Scream, Flemish renaissance, painting by Robert Crumb, Death.’”

Kouros describes his experience with AI as “fun” and was impressed with the results that the image synthesizer gave him.

What was his final take?

“In my opinion, this specific style of animation won’t stand the test of time, but it will probably be a reminder of times before this AI thing really took off.

I embrace new tech as it comes along and I have already started making images with the aid of image generators.
I’ve actually learned more about art history in this last year using AI, than in seven years of art schools.”
