Vuntra City is a procedural VR city generator in Unreal Engine 5 developed by a single person over the last few years. I know, I know. Procedurally generated content has got some serious shortcomings. Too many games with procedural content are just thinly veiled programmer art designed to fill spaces rather than be part of the experience.
The author actually does a great job recognizing those traditional limitations and attempting to fix them. Probably the best observation they make is not from the technical side, but the aesthetic side.
It turns out they have arrived at an excellent solution with just some good observations and shockingly simple engineering. As an engineer, I see far, far too many projects over-complicate things that could be done much more simply. Simplicity is how you know you’re on the right track. Complexity leads to tears.
After 2 years of experimenting, they have a really interesting solution. Check out the VuntraCity YouTube channel to see videos of how they experimented with different techniques and solutions. I particularly liked how they used a plain old treemap layout to break up boring city grid structures. Combining it with a caching and pooled allocation system is nothing new, but it’s a good little optimization.
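Their exact generator isn’t public, but the treemap idea is easy to sketch. Below is a minimal slice-and-dice treemap in Python that carves a rectangular city block into lots whose areas are proportional to arbitrary weights – the kind of uneven subdivision that breaks up a uniform grid. The function and the example weights are my own invention for illustration, not Vuntra City’s code.

```python
# Minimal slice-and-dice treemap: carve a rectangle into lots whose
# areas are proportional to the given weights, alternating the split
# axis at each level of recursion. (Illustrative sketch only.)

def treemap(rect, weights, vertical=True):
    """rect = (x, y, w, h); returns one sub-rect per weight."""
    x, y, w, h = rect
    if len(weights) == 1:
        return [rect]
    mid = len(weights) // 2
    left, right = weights[:mid], weights[mid:]
    frac = sum(left) / sum(weights)     # area share of the left group
    if vertical:                        # split along the x axis
        a = (x, y, w * frac, h)
        b = (x + w * frac, y, w * (1 - frac), h)
    else:                               # split along the y axis
        a = (x, y, w, h * frac)
        b = (x, y + h * frac, w, h * (1 - frac))
    return treemap(a, left, not vertical) + treemap(b, right, not vertical)

# Carve a 100x60 block into five unequal lots:
lots = treemap((0, 0, 100, 60), [5, 3, 2, 1, 1])
```

Because the split axis alternates, the resulting lots vary in both width and depth, which is what visually breaks the monotony of a regular grid.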
In this case, it was the game “The Day Before”. It was the most hyped game in Steam history. The social media blitz by the strange founders Eduard and Aisen Gotovtsev was something out of a fairy tale. They were the hottest thing on gaming sites and took in millions of dollars from fans that ate up their claims and demo clips – even though experts were dubious from day one. As people started looking deeper, the story got stranger and stranger but the money and fans poured in. I wrote about how the whole thing seems like a scam. Sadly, it’s all come true.
KiraTV did epically good coverage of this strange pair and the Fntastic studio that raised red flags from day one. But nobody seemed to care or heed the warnings. The two projected what I can only describe as a cult-like charisma. People forked over millions of dollars to a pretty much unknown and unproven pair with no track record. Their studio was equally strange – they seemed to be grooming and manipulating young developers into working for them, apparently for free.
As development went on and people expected updates on progress, the messaging from the developers became more and more strange. Industry vets asked questions and were given inconsistent and confusing answers; yet a very solid core of fans rabidly defended them despite all the experts calling for serious caution.
In the end, after 5 years of development, the game was released to terrible reviews, not delivering even a portion of the promised features and at dramatically worse quality than all the demos showed. As people absorbed how bad the game was, Fntastic quickly announced it was closing its doors because the game flopped. The game was only on sale for 4 days before they announced the studio closure.
A few hours after the studio announced its closure, sales of The Day Before on Steam were halted. “The Day Before has failed financially, and we lack the funds to continue,” the studio said in a statement posted to Twitter. “All income received is being used to pay off debts to our partners.”
Their response to countless gamers that were promised the moon and stars and paid for $40 early access? “Shit happens”
I smell a lawsuit. I HOPE there is a lawsuit. These creators clearly were misrepresenting the game they were making, took people’s money, and then launched the game in some twisted attempt to show they didn’t just take the money and run.
What’s sad is that almost anyone could see this coming. The signs were all there. Yet, much like Bitcoin, it’s amazing how many people absolutely refused to believe the founders were psychological manipulators, ignored the continual warnings of industry experts, and refused to see that the pair were promising something that simply could not be delivered the way they were making it (on the backs of naïve young developers they didn’t even appear to pay).
If you’re curious what one of the most hyped games in Steam history ended up looking like at launch, here’s the first 22 minutes:
More colors and select premium materials such as exteriors featuring leather, wood, and bamboo alongside traditional PC case materials like metal, glass, and plastic
Upward trends in computer cases:
aquarium-style cases with lots of glass
integrated display cases
high-end showcase chassis
designs with tasteful RGB lighting.
Downward trends in case design:
‘old school’ RGB
classic towers
pure workstations due to the host of attractive alternatives now available.
Back-plugin motherboards – motherboards that feature plugs on the back/bottom of the board. Components like memory, the CPU, and M.2 drives go on top, and all the cable clutter goes on the back.
Power supplies: quieter and passively cooled silent PSUs, more watts for silent and SFF builds, more 12VHPWR connectors, and white versions of new and upcoming PSUs.
Sizes are also shrinking, with vendors demonstrating some truly tiny 1000W power supplies.
They are also offering quiet 1100W, 1300W, 1600W, and 2000W supplies that come with heat pipes and passive cooling blocks to reduce noise.
12-year power supply warranties
Immersive experience devices like the Dyn X Dynamic Racing Experience and the Orb X Gaming Throne.
A lot of the social media hyperbole is being fueled by fear and uncertainty. Not that there isn’t a real problem with generative AI taking away people’s livelihoods or possible copyright violation; but it’s worth knowing what one is talking about before heading off with pitchforks and torches.
The Supreme Court laid out the difference first in Baker v. Selden, and re-emphasized it a century later in Mazer v. Stein. “Unlike a patent, a copyright gives no exclusive right to the art disclosed; protection is given only to the expression of the idea—not the idea itself.” In this way, each type of intellectual property right exists in different types of creations, which arise in different ways and have different requirements for protection. “[C]opyright protects originality rather than novelty or invention,” which is the domain of patents, said the Court in Mazer.
Indeed, what the Court made clear in Feist v. Rural, is that authorial works need to be original; that is, both created independently and “creative.” Other cases, such as Bleistein v. Donaldson, spoke of original expressions as “personal reaction upon nature,” where the author contributes “something recognizably his own,” per Alfred Bell.
So the question for copyright becomes: ‘Is AI creative?’ This is a tough question because it’s not clear what creativity really is. However, that philosophical or neuroscientific point is not that important when it comes to law. What is important is the previous language used to describe what is protected.
The article’s author suggests that the emerging legal arguments indicate the kind of ‘creativity’ covered by copyright is that of human activity. Neither the courts nor the US Copyright Office have so far found AI to be creative with respect to the wording of existing copyright law.
Whether that argument is valid and sticks is a whole other story. Law is fickle and can change. It also doesn’t touch on the question of fair use of publicly displayed images, or the argument that AI might just be considered to be using copyrighted work to learn techniques while making its own reactive/derivative works – which is something art students do, and the whole point of going to art school.
Either way, we’re likely to see one of the most important legal decisions in decades, with profound repercussions for future generations.
It looks like the illegal slowdowns by the ILWU that ended up destroying the Port of Portland and leaving its own members jobless when shipping companies cut all ties have finally come to a head – in the bankruptcy of the ILWU.
The bankruptcy of the union was the final result of decades-old litigation between the union and an affiliate of International Container Terminal Services Inc. It started with what a jury later determined to be illegal tactics, when workers for years caused operational disruptions at the Port of Portland.
The Port of Portland has succumbed to the damage of the illegal ILWU slowdowns. After attempting to restart the port, it announced that it will again close all container operations effective Oct 1, 2024. The Port is closing all container operations after losing $13 million in each of the last 2 years – never recovering from the illegal ILWU slowdowns that caused shippers to pull out of Portland and ultimately bankrupted the union itself.
Techspot wrote up a simple introduction to game development as a career. I think it’s a reasonable intro article for anyone interested in getting into the field as it is today, but it definitely doesn’t go very deep into career development, whether the field would be a good fit for your personality, or matching long-term career goals.
I think some of the comments at the bottom are pretty interesting though. 🙂
The case revolved around the generation of a pop idol image – not the use of copyrighted images in the training of a generative AI model, which is at the center of a current US lawsuit brought by artists.
The argument was the one that we’ve been hearing already: because it was a human being who wrote the relevant parameters for the AI model and ultimately selected the image in question, the final output is directly generated based on their intellectual input and “reflects the plaintiff’s personalized expression.”
LoRa (Long Range) radio uses little power and can communicate up to three miles in urban areas and five miles or more in the open. Many drone operators now use a repeater, carried on another drone, to extend the reach.
First-person view (FPV) drones are quickly becoming a key weapon in the Ukraine conflict. Drone warfare is developing rapidly, with thousands of drones deployed every month. Both Russia and Ukraine have fielded jammers and drone guns that fire radio waves to knock out drone communications.
Most recently a Russian group claims to have developed a ‘magic radio’ for FPVs which is highly resistant to jamming. A physicist with the handle DanielR evaluated the device minutely in a detailed Twitter thread. While the technology is not astounding, what is interesting is that the device uses cheap, off-the-shelf components.
After a first round in which the judge rejected a few arguments, the suit has been tightened up a bit.
New artists – including photographers and game artists – have joined the lawsuit
New arguments have been added:
In an effort to expand what is protected for artists, the complaint claims that even non-copyrighted works may be automatically eligible for copyright protection if they include the artists’ “distinctive mark,” such as their signature, which many do contain.
AI companies that relied upon the widely used LAION-400M and LAION-5B datasets – which contain not the copyrighted works themselves but only links to them and other metadata about them, and were made available for research purposes – would have had to download the actual images to train their models, thus making “unauthorized copies.”
The suit claims that the very architecture of diffusion models – in which an AI adds visual “noise” to an image in multiple steps, then tries to reverse the process to recover the initial image – is itself designed to come as close as possible to replicating the initial training material. The lawsuit cites several papers about diffusion models and claims they are simply ‘reconstructing the (possibly copyrighted) training set’.
This third point is likely the actual meat of the suit, but they haven’t spelled it out as clearly as I think they should have. To me, the questions that are really the crux of the matter are:
Do large-scale models work by generating novel output, or do they just copy and interpolate between individual training examples?
Is training (using copyrighted art) covered by fair use, or does it qualify as a copyright violation?
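For readers unfamiliar with the architecture the suit describes, the forward half of a diffusion model – progressively blending an image toward Gaussian noise in many small steps – can be sketched in a few lines. This is a toy only: pixel values as a plain list, and a step count and noise schedule I invented for illustration. Real models operate on image tensors and train a network to reverse the noising.

```python
# Toy forward diffusion: each step keeps sqrt(1-beta) of the signal
# and adds sqrt(beta) of fresh Gaussian noise. After many steps the
# "image" is essentially noise; training teaches a model to undo
# these steps one at a time. (Schematic, not any vendor's code.)
import math
import random

def forward_diffuse(pixels, steps=10, beta=0.1, seed=0):
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    x = list(pixels)
    for _ in range(steps):
        x = [math.sqrt(1 - beta) * p + math.sqrt(beta) * rng.gauss(0, 1)
             for p in x]
    return x

# A tiny four-"pixel" image, noised over 50 steps:
noisy = forward_diffuse([0.2, 0.8, 0.5, 0.1], steps=50)
```

The legal argument in the suit hinges on what the learned reverse process does: whether undoing these steps produces genuinely novel output or pulls the result back toward specific training images.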
Determining what autonomous driving algorithms should do in difficult life-and-death situations is a real problem. Until now, many have likened it to the famous ‘trolley problem’.
There is a runaway trolley barreling down its tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them but you are standing in the train yard next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:
Do nothing, in which case the trolley will kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
The problem asks which is the more ethical option? Or, more simply: What is the right thing to do?
Analysts have noted that the variations of these “Trolley problems” largely just highlight the difference between deontological and consequentialist ethical systems. Researchers, however, are finding that distinction isn’t actually that useful for determining what autonomous driving algorithms should do.
Instead, they note that drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?
For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.
Researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations, and from that developed the Agent Deed Consequence (ADC) model.
The approach is highly utilitarian. It side-steps complex ethical problems by simply collecting data on what average people would consider ethical or not. The early research on ADC claims that the judgments of average people very often match those of ethics experts, even though the former were not trained in ethics. This more utilitarian approach may be sufficient for some tasks, but it is inherently at risk from the larger issue of “If everyone jumped off a bridge, would you?” – often referred to as the bandwagon fallacy. Rule by the judgment of the masses is something even Plato, speaking through Socrates, argued against in The Republic.
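The published ADC model is fit statistically to survey responses, but the underlying idea – that a moral judgment combines separate evaluations of the agent, the deed, and the consequence – can be sketched as a toy scoring function. The weights, input scale, and function name below are invented purely for illustration and are not from the research.

```python
# Toy sketch of the Agent-Deed-Consequence (ADC) idea: judge a traffic
# decision by combining three independent factors. Weights are assumed,
# not fitted; the real model is estimated from human judgment data.

def adc_judgment(agent_intent, deed_norm, consequence):
    """Each input is in [-1, 1]:
    agent_intent: malicious (-1) .. benevolent (+1)
    deed_norm:    norm-violating (-1) .. norm-following (+1)
    consequence:  harmful (-1) .. beneficial (+1)
    Returns an overall judgment score in [-1, 1]."""
    weights = {"agent": 0.3, "deed": 0.3, "consequence": 0.4}  # assumed
    return (weights["agent"] * agent_intent
            + weights["deed"] * deed_norm
            + weights["consequence"] * consequence)

# Speeding (norm-violating deed) to rush someone to the hospital
# (good intent) and arriving safely (good consequence) scores mildly
# positive, matching the intuition that context softens the violation.
score = adc_judgment(agent_intent=1.0, deed_norm=-1.0, consequence=1.0)
```

The point of the decomposition is exactly what the researchers observed: the same deed (speeding) is judged differently depending on who does it and what happens, which a pure rule-based or pure outcome-based system misses.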