I recently visited a non-technical filmmaker friend and looked at a bunch of his LG and Samsung 360 cameras. They’re all made of 2 hemispheres, with 1 sensor on each side.
Each has its own set of gimmicks and phone apps, but the way they work is either there’s an app that auto-stitches the 2 halves together once you download to your PC, or you have to do it manually in Premiere etc.
The new and “cutting edge” GoPro Fusion 360 camera works on the same principle, and the resolution is not impressive (only 2 cameras, not 8K).
I’d like a 360 camera for when I go hiking, but to me this all looks very unsatisfactory. If I were to create a 360 camera, I would get as many smartphone sensors as possible (26 or 52 or more; maybe even research a fly-eye-style spherical light capture rig with hundreds of sensors), stick them to the smallest ball I can, and inside place an ASIC GPU that blends their exposures, automatically stitches the 360 texture, and encodes the video.
Why would you actually need a lot of sensors? These cameras use fish-eye lenses that give the sensor greater than 180 degree coverage. The fish-eye lens does introduce distortions but they’re predictable distortions that can be removed with software. Basically I see little point to what you’re suggesting…
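To make the “predictable distortions” point concrete: for an ideal equidistant fisheye lens the distance from the image center is simply proportional to the angle off the optical axis, so un-warping it into an equirectangular panorama is just a per-pixel remap. A minimal sketch of that mapping (assuming the equidistant model r = f·θ; `f`, `cx`, `cy` are hypothetical calibration values, and real lenses need a few extra distortion terms):

```python
import numpy as np

def equirect_to_fisheye(lon, lat, f, cx, cy):
    """Map a view direction (lon, lat in radians) to pixel coordinates
    on an equidistant fisheye image (r = f * theta).
    Camera looks along +z; (cx, cy) is the image center."""
    # Unit direction vector for the requested longitude/latitude
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
    phi = np.arctan2(y, x)                    # rotation around the axis
    r = f * theta                             # equidistant projection
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

A stitcher evaluates this for every output pixel and samples the fisheye image at the result; the math is fixed per lens, which is exactly why the distortion is removable in software.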
Also I don’t have any experience with LG or Samsung’s 360 cameras but having to wait until you download them to your PC to stitch the images together sounds ridiculous. The 360 camera for my Essential phone does that automatically on the phone and you can view them right there…I honestly thought that was the norm, as it’s the only 360 camera I’ve actually used.
Yeah, some cameras stitch on the phone, but the camera itself should automatically stitch as it’s receiving pixel information from the sensors, instead of using my phone.
Two reasons why having only 2 sensors is bad:
Resolution/price. A 360 photo sphere is only high enough resolution to not look shit (in VR) if it’s 8K. If I take 26 pictures with my phone’s 20 MP sensor to make a photo sphere, I get much better quality than the 360 cameras. I say get a bucket of cheap sensors.
Smoothly stitching exposure changes. If on one side the sun is super bright, and the other is in shadow, you only have 2 sensors to interpolate the lighting information and fix other stitching issues. But if you had 100 sensors, you’d capture a much more accurate light field.
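One way to picture the second point: with many sensors you have many exposure samples spread around the sphere, and blending them becomes an interpolation problem. A toy sketch of that idea (the rig geometry, gains, and `sharpness` falloff are all made-up illustration values, not a real camera’s numbers):

```python
import numpy as np

def blended_gain(direction, cam_dirs, cam_gains, sharpness=8.0):
    """Interpolate per-camera exposure gains for one view direction.
    cam_dirs: (N, 3) unit vectors of camera optical axes (hypothetical rig).
    cam_gains: (N,) exposure gain each camera picked for its view.
    More cameras means denser samples and a smoother blend."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    cos = cam_dirs @ d                   # how well each camera faces this direction
    w = np.exp(sharpness * (cos - 1.0))  # weight peaks at the best-aligned camera
    return float(np.sum(w * cam_gains) / np.sum(w))
```

With only 2 cameras the “interpolation” is basically a hard seam between two exposures; with dozens, the gain varies gradually across the sphere.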
Resolution isn’t everything. You can get lots of megapixels on the cheap, but I’m dubious as to whether cheap cameras would improve image quality.
I know what an ASIC is
Video decoders are mass produced because a vast number of devices need them, so they are cheap. In contrast, developing a custom chip for stitching panoramas would be far too expensive due to the low quantity produced. Still, I don’t see the need for an ASIC in the first place.
Any mobile GPU should be fast enough to do this in real time. And because GPUs are programmable the spare computation power could be used for effects/denoising/etc.
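A quick back-of-envelope check on the “any mobile GPU should be fast enough” claim (the ops-per-pixel count is a rough assumption for a remap-plus-blend pipeline, not a measured figure):

```python
# Rough feasibility check: can a mobile GPU stitch 8K 360 video in real time?
width, height, fps = 7680, 3840, 30      # 8K equirectangular output
pixels_per_s = width * height * fps      # output pixels per second
ops_per_pixel = 50                       # assumed: remap + sample + blend + tone
required_gflops = pixels_per_s * ops_per_pixel / 1e9
print(f"~{required_gflops:.0f} GFLOPS needed")
```

That lands in the tens of GFLOPS, while modern mobile GPUs are rated in the hundreds, so the claim looks plausible with headroom left over for effects and denoising.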
Good points. Though I bet a whole bunch of iPhone 4 cameras would do great. If each sensor overlaps 50% with the sensor to the right, and the one to the left, and up, and down, you’ll capture more light, as if you had a bigger sensor.
Yeah, the iPhone camera is great. I’ve noticed that cheap phones often use pretty bad cameras though, despite the other hardware being decent. This makes me think that good, small cameras are pretty expensive.