At the John Deere booth at this year’s CES in the Las Vegas Convention Center, convention-goers could do something incredible with an iPhone. They could push the PAUSE button on an iPhone and, thirteen hundred miles away, in the middle of a field outside Austin, Texas, a giant, bright-green, driverless tractor would stop short. Hit RESUME and the tractor started up again. Put down the iPhone and the tractor resumed tilling the field, all by itself.
The breadth of what you can do with the tractor via the demo app was limited. You could stop and resume the tractor and increase or decrease its speed, both in a straight line and while turning; there are no steering controls. But what this signals is huge.
In the demo, a farmer first geo-fences the field boundaries and then the tractor can determine its own path based on how wide the tiller is. Tillage is the only job the technology is programmed to handle but John Deere hopes to have a complete autonomous production system supporting every step of the farming process by 2030.
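As a rough illustration of how a path could be derived from a geo-fenced boundary and an implement width (the function and numbers below are hypothetical, not John Deere’s actual planner, and real planners handle arbitrary polygons, headlands, and turn sequencing):

```python
import math

def plan_passes(field_width_m: float, tiller_width_m: float) -> list[float]:
    """Toy coverage planner: return the center-line offset of each
    parallel pass needed to till a rectangular, geo-fenced field."""
    n_passes = math.ceil(field_width_m / tiller_width_m)
    # Center each pass within its strip of the field.
    return [(i + 0.5) * tiller_width_m for i in range(n_passes)]

# A 100 m wide field with a 6 m tiller needs 17 passes.
print(len(plan_passes(100, 6)))  # 17
```

The basic idea is just coverage: once the boundary is known, the implement width determines how many parallel strips fill the field.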
The John Deere spokespeople ballparked such a tractor at between $600,000 and $700,000, with the autonomous technology adding a further $100,000 on top of that. Tractors from the 2020 model year onward can likely be retrofitted with the tech; the update should “take only about a day,” according to a 2022 CNET story.
There’s no doubt in my mind this is how the future of farming will look. It’s been coming for a long time, and spending long hours out in the field will almost certainly be a thing of the past very soon.
There are already predictions that John Deere and other equipment manufacturers will run fully autonomous fleets that they manage themselves and simply send to your fields on a subscription basis.
Determining what autonomous driving algorithms should do in difficult life-and-death situations is a real problem. Until now, many have likened it to the famous ‘trolley problem’:
There is a runaway trolley barreling down its tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them but you are standing in the train yard next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two (and only two) options:
Do nothing, in which case the trolley will kill the five people on the main track.
Pull the lever, diverting the trolley onto the side track where it will kill one person.
The problem asks: which is the more ethical option? Or, more simply: what is the right thing to do?
Analysts have noted that the variations of these “Trolley problems” largely just highlight the difference between deontological and consequentialist ethical systems. Researchers, however, are finding that distinction isn’t actually that useful for determining what autonomous driving algorithms should do.
Instead, they note that drivers have to make many more realistic moral decisions every day. Should I drive over the speed limit? Should I run a red light? Should I pull over for an ambulance?
For example, if someone is driving 20 miles over the speed limit and runs a red light, then they may find themselves in a situation where they have to either swerve into traffic or get into a collision. There’s currently very little data in the literature on how we make moral judgments about the decisions drivers make in everyday situations.
Researchers developed a series of experiments designed to collect data on how humans make moral judgments about decisions that people make in low-stakes traffic situations, and from that developed the Agent Deed Consequence (ADC) model.
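The ADC model’s central claim is that a moral judgment combines evaluations of the agent (their intent), the deed (the act itself), and the consequence. A minimal sketch of that idea follows; the scoring scale and weights are my own invention for illustration, not the researchers’ actual formulation:

```python
def adc_judgment(agent: float, deed: float, consequence: float,
                 weights=(1.0, 1.0, 1.0)) -> float:
    """Toy Agent-Deed-Consequence score: each component is rated
    from -1 (clearly bad) to +1 (clearly good), and the overall
    moral acceptability is their weighted average."""
    wa, wd, wc = weights
    total = wa * agent + wd * deed + wc * consequence
    return total / (wa + wd + wc)

# A driver with good intent (rushing someone to the hospital, +1)
# runs a red light (deed -1) but nobody is hurt (consequence +1):
# the act comes out mildly acceptable rather than simply wrong.
print(adc_judgment(1.0, -1.0, 1.0))
```

The point of the model is exactly this compositionality: the same deed (running a red light) gets judged differently depending on who did it, why, and what happened as a result.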
The approach is highly utilitarian. It side-steps complex ethical problems by simply collecting data on what average people consider ethical or not. The early ADC research claims that the judgements of average people very often match those of ethics experts, even without any training in ethics. This approach may be sufficient for some tasks, but it inherently risks the larger problem captured by “If everyone jumped off a bridge, would you?”, often called the bandwagon fallacy. Letting decisions be made by the masses is something even Socrates argues against in Plato’s Republic.
Daniel Coppen recently teamed up with media artist Tomo Kihara to develop “How (not) to get hit by a self-driving car,” a street-based game designed to improve people detection in autonomous vehicles by challenging people to avoid being recognized by an object-detection algorithm.
Participants use creative maneuvers like cartwheels and disguises to test and potentially enhance the AI’s ability to identify pedestrians in varied and unpredictable scenarios. The game’s creators hope to conduct a global tour to gather diverse data, aiming to share it with researchers and self-driving car developers for better training of these systems.
Beware is an in-development demo by Ondrej_Svadlena. At a glance, it’s an open-world driving game that first appeared in May 2018. In it, you are a driver in what appears to be a rainy, foggy Eastern Bloc country in the ’70s. What makes this thing stand out is the atmosphere of tension, disorientation, and paranoia it creates. It’s really fantastic. The player is dropped into anonymous, listless locations, hampered by dense fog and rain-slick backroads, and encounters various solitary landmarks as well as mysterious and menacing events.
It’s definitely worth checking out. His Patreon page has the latest information about development, and supporters get access to extensive additional content. It seems he is up to version 13, and it now appears to support VR.
Old games often suffer from the limited graphics capabilities of the time they were made, while developing new games costs a fortune due to the requirements to author high quality models and textures. What if you could solve BOTH problems – with the same solution? A machine learning project from Intel Labs in 2021 called “Enhancing Photorealism Enhancement” might push rendering toward photorealism a lot quicker and easier.
Researchers studied how to use a convolutional network to re-render the scene. Below you can see an example of how they used the Cityscapes dataset to produce a much more realistic output from a driving game – all in realtime.
You can read how the image enhancement actually works in their paper (PDF). It includes a lot of good information about how their method improves on previous attempts, which suffered from color shifts, object hallucination, and temporal instability. They achieve this by exploiting the extra information available in rendered scenes – notably the G-buffer – along with a specialized discriminator and a segmentation network.
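The key input trick – feeding auxiliary G-buffer channels (depth, normals, material information) to the network alongside the rendered RGB frame – can be sketched in a few lines. The shapes and channel choices here are illustrative, not the paper’s exact setup:

```python
import numpy as np

H, W = 8, 8
rgb = np.random.rand(H, W, 3)       # the game's rendered frame
depth = np.random.rand(H, W, 1)     # G-buffer: per-pixel depth
normals = np.random.rand(H, W, 3)   # G-buffer: surface normals
material = np.random.rand(H, W, 1)  # G-buffer: material/albedo id

# The enhancement network consumes all channels jointly, so scene
# structure (not just pixel colors) informs the re-rendering.
net_input = np.concatenate([rgb, depth, normals, material], axis=-1)
print(net_input.shape)  # (8, 8, 8)
```

Because the renderer already knows depth, geometry, and materials exactly, the network gets far richer conditioning than a photo-to-photo translation model would, which is part of why it avoids hallucinating objects.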
“It’s like a child’s backyard project.” The cool-looking unibody design shakes and rattles, only goes about 12 mph, has no heater or AC, and isn’t practical, but there it is, and it turns heads, since it’s basically an art car.
Ever want to buy a $100,000+ Porsche, BMW, Audi, Mercedes, or other semi-super car in Portland? Or maybe you just want to look at how the richer half lives.
If so, you might look at Grand Prix Motors in Portland. They have tons of interesting, expensive cars to browse on their website. Their prices actually appear decent, and they move a good quantity of inventory, so it’s always fun to browse things you might never afford, or never want to actually spend your money on.
They also have some pretty wild cars that randomly migrate through their consignment section for additional spice.