By John Brownlee at 4:43 am Mon, Aug 18, 2008
Wow! That’s impressive.
Of course, with techniques like these, video is now as utterly useless as proof of anything as photos have been for a while.
This article now appearing on the BB mothership in 5…4…3…2…1
Alternatively, it will resurface in a couple of months. Hooray for eParkinsons!
The first video doesn’t look like it was shot in anything higher than 8-bit, and you couldn’t have achieved such a good color correction with less than 10-bit or even 12-bit footage, so I think it’s genuine. Very impressive, and just as promising, especially for low-budget filmmakers.
That’s the first thought of TOMMY. This is really the end of video evidence.
The other thing I thought of right away was: they’ll make millions selling this to Hollywood.
Despite the standard aphorism, the camera always lies. The only question is whether the lie is telling the truth.
The results here are beautiful and could mean all sorts of things for the future of video, but what actually got me the most was the precision of their motion-based spatial mapping algorithms.
This degree of precision is a major step on the road to one day being able to easily “scan” an object, a room, or an entire building into a 3D model simply by pointing a video camera around in all the nooks and crannies.
Wow, never thought I would hear the phrase “Spacetime Fusion Algorithm” outside an episode of Star Trek, but DAMN that is cool.
Actually, I doubt that this will make much of a splash in Hollywood.
A couple of reasons why. First, many of the problems they are fixing only exist in low-end consumer equipment. The colour depth, overexposure and similar issues only show up in cheap or mishandled gear. Blown-out highlights ruining areas of your film don’t happen to professional cameramen. Hell, they don’t happen to first-year film students who pay attention in class and check their zebra stripes before filming.
Second, their method of layering on information from photographs seems to kill the crispness of the shot. This one is a little harder to call because the video isn’t great quality, but if you look at the before/after comparisons, the colour may be far better but the sharpness has taken a hit. It would not be acceptable even for HD television, let alone a movie.
Also, if you listen to the description of the whole process, the impressive stuff was filmed in stereo to generate all the necessary information. The types of productions that would have the technical knowledge to pull this off are also going to have enough experience to avoid 95% of these situations in the first place.
And finally, all of this is already possible. And while it may take longer, current methods also offer more control. This is the film equivalent of the auto levels stuff in Photoshop. A neat trick, but no professional would ever dream of using it if more nuanced tools were available.
These guys are obviously smart and have big futures ahead of them. But these particular technologies are neat tech demos, not game changing tools.
Jaw-dropping stuff. Looks promising, but it’s hard to tell how much tweaking was required and render time goes unmentioned. If this takes an hour a frame on a fast computer, it won’t be much use to most people.
I think the point of this software is not to improve Hollywood, but to give those of us who aren’t professional filmmakers tools to improve the videos we shoot with the equipment we have.
In short, this probably won’t make much of a splash in Hollywood, but it could make YouTube much prettier.
Except that it isn’t designed for amateur filmmakers. How many people posting to YouTube film their stuff in stereo, or from multiple, carefully controlled angles simultaneously?
This is neat stuff, but very much a solution looking for a problem. The real-world chance of someone being technical enough to make use of it but not skilled enough to need it is vanishingly small.
It sounded to me like their stereo was being extracted from temporally offset frames?
I could put all these effects together using current-gen post-process software, such as Boujou for the point-cloud extrapolation, Combustion (or one of a half-dozen other compositing packages) for colour correction, masking, tracking and stabilisation, and LightWave or Maya to render it all out. But I wouldn’t be home in time for dinner that evening, I tells ya.
As to “changing Hollywood” – well, when it’s out as a plugin for existing compositing packages, sure. As in “people will suddenly alter their film-making practices”, not so much. But a lot of effects guys would scream a lot less about being asked to take that boom shadow out of the shot. 😉
Nah… this isn’t a big deal. Hollywood and even low-budget indies have already figured out how to deal with less-than-great video. Just bloom everything.
Think about all the crap-quality camp movies that will never get even a proper DVD release, let alone HD.
Get together like-minded movie geeks to take pictures of filming locations and actors and voila! 1080p version of Hindi Superman!
“Except that it isn’t designed for amateur film makers.”
Actually, the video clearly suggested it was for amateur filmmakers. Most of the presentation is about fixing problems with consumer-level video. Secondly, this is a technology from “Students at the University of Washington,” not the latest tech demo from ILM. It may never be available for public use at all, and it will almost certainly be much different if it ever is.
“How many people posting to YouTube film their stuff in stereo or from multiple, carefully controlled, angles simultaneously?”
I’m talking about this kind of YouTube video:
Not this kind:
Also, a lot of the most promising stuff in the presentation required manipulating no more than one frame of existing video and no extra location shooting; e.g., the scar on the tree and the parking sign in front of the flowers.