360 camera technology thoughts

I recently visited a non-technical filmmaker friend and looked at a bunch of his LG and Samsung 360 cameras. They’re all made of 2 hemispheres, with 1 sensor on each side.

Each has its own set of gimmicks and phone apps, but the way they work is the same: either an app auto-stitches the 2 halves together once you download the footage to your PC, or you have to do it manually in Premiere etc.

The new and “cutting edge” GoPro Fusion 360 camera works on the same principle, and the resolution isn’t impressive (only 2 cameras, not 8K).

I’d like a 360 camera for when I go hiking, but to me this all looks very unsatisfactory. If I were to create a 360 camera, I would get as many smartphone sensors as possible (26, or 52, or more; maybe research a fly’s-eye-style spherical light capture rig with hundreds of sensors), stick them to the smallest ball I can, and inside place an ASIC/GPU that blends their exposures, automatically stitches the 360 texture, and encodes the video.

Why is it that the industry has moved away from this idea: https://benchmarkreviews.com/wp-content/uploads/2015/07/Panono-Explorer-Edition-Panoramic-Ball-Camera-Unveiled.jpg ?

Sensors are expensive, ASICs ludicrously so. That said, an ASIC would be overkill anyway; I don’t see any reason this couldn’t be done in real time with a GPU.

Why would you actually need a lot of sensors? These cameras use fish-eye lenses that give each sensor greater-than-180-degree coverage. The fish-eye lens does introduce distortion, but it’s predictable distortion that can be removed in software. Basically I see little point in what you’re suggesting…
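To make “predictable distortion” concrete, here’s a minimal sketch of the software side: mapping each pixel of an equirectangular panorama back to a sample position in a fisheye frame. The function name is made up, and the simple equidistant lens model (r = f·θ) is an assumption; real lenses need calibrated distortion polynomials on top of this.

```python
import numpy as np

def fisheye_to_equirect_coords(out_w, out_h, fisheye_size, fov_deg=200.0):
    """For each pixel of an equirectangular output, compute the (x, y)
    sample position in an equidistant fisheye image (r = f * theta).
    Returns two float arrays usable with any remap/sampling routine."""
    # longitude/latitude grid for the output panorama
    lon = (np.arange(out_w) / out_w - 0.5) * 2 * np.pi      # -pi .. pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi          # pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)

    # direction vector on the unit sphere (z = lens optical axis)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)

    # angle off the optical axis; equidistant model maps it linearly to radius
    theta = np.arccos(np.clip(z, -1.0, 1.0))
    f = (fisheye_size / 2) / np.deg2rad(fov_deg / 2)
    r = f * theta

    # rotate around the axis to get the 2D position on the fisheye image
    phi = np.arctan2(y, x)
    cx = cy = fisheye_size / 2
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Because the mapping is fixed per lens, it can be precomputed once and applied per frame as a plain texture lookup.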

Also, I don’t have any experience with LG or Samsung’s 360 cameras, but having to wait until you download the footage to your PC to stitch the images together sounds ridiculous. The 360 camera for my Essential phone does that automatically on the phone and you can view the result right there… I honestly thought that was the norm, since it’s the only 360 camera I’ve actually used.

By ASIC I meant something like how smartphone GPUs have embedded H.265 decoders, which makes them faster at that than my laptop.

And my impression is that if you bought a bucket of 5-megapixel phone front-facing cameras from China for cheap, they’d amount to something like 8K of rich light information, with a lot of redundancy, no? :slight_smile:
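As a sanity check on that claim (assuming “8K” means a 7680×3840 equirectangular frame, and using the 26-sensor count from above):

```python
# Back-of-envelope: does a pile of cheap 5 MP sensors cover an 8K panorama?
pano_mp = 7680 * 3840 / 1e6            # megapixels in the output sphere
raw_mp = 26 * 5.0                      # 26 sensors x 5 MP each
print(round(pano_mp, 1), raw_mp, round(raw_mp / pano_mp, 1))
# -> 29.5 130.0 4.4
```

So roughly 4× more raw pixels than the panorama needs, which is exactly the redundancy you’d want for overlap and blending.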

Yeah, some cameras stitch on the phone, but the camera itself should stitch automatically as it receives pixel information from the sensors, instead of using my phone.

Two reasons why having only 2 sensors is bad:

  • Resolution/price. A 360 photo sphere only has high enough resolution to not look like shit (in VR) if it’s 8K. If I take 26 pictures with my phone’s 20 MP sensor to make a photo sphere, I get much better quality than the 360 cameras. I say get a bucket of cheap sensors.

  • Smoothly stitching exposure changes. If the sun is super bright on one side and the other side is in shadow, you only have 2 sensors to interpolate the lighting information and fix other stitching issues with. But if you had 100 sensors, you’d capture a much more accurate light field.
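The second point is essentially a weighted per-pixel blend over many overlapping captures. A minimal sketch in plain NumPy (hypothetical function; real stitchers also handle alignment and tone mapping):

```python
import numpy as np

def blend_exposures(images, weights):
    """Weighted per-pixel blend of overlapping captures.
    images:  list of HxW float arrays (same scene region, varying exposure)
    weights: list of HxW float arrays (e.g. feather masks, confidence maps)
    More cameras -> more samples per pixel -> smoother exposure transitions."""
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, w in zip(images, weights):
        num += w * img
        den += w
    # avoid division by zero where no camera sees a pixel
    return num / np.maximum(den, 1e-9)
```

With only 2 inputs the weights can do little more than crossfade along one seam; with dozens of inputs every pixel gets several independent exposure samples to average over.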

Resolution isn’t everything. You can get lots of megapixels on the cheap, but I’m dubious as to whether cheap cameras would improve image quality.

I know what an ASIC is :slight_smile:
Video decoders are mass-produced because a vast number of devices need them, so they are cheap. In contrast, developing a custom chip for stitching panoramas would be far too expensive due to the low quantity produced. Still, I don’t see the need for an ASIC in the first place.
Any mobile GPU should be fast enough to do this in real time. And because GPUs are programmable the spare computation power could be used for effects/denoising/etc.
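Quick arithmetic backs this up (the ~50 arithmetic ops per output pixel for remap + blend is a guess on my part):

```python
# Rough throughput check: is real-time 8K stitching plausible on a mobile GPU?
# 7680x3840 output at 30 fps, assumed ~50 ops per output pixel.
pixels_per_s = 7680 * 3840 * 30
gflops_needed = pixels_per_s * 50 / 1e9
print(round(gflops_needed, 1))
# -> 44.2
```

~44 GFLOPS is well within what mobile GPUs deliver, leaving headroom for the effects/denoising mentioned above. Memory bandwidth, not compute, would likely be the real bottleneck.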

Good points. Though I bet a whole bunch of iPhone 4 cameras would do great. If each sensor’s view overlaps 50% with its neighbours to the left, right, up, and down, you capture more light, as if you had a bigger sensor.
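For what it’s worth, here’s the back-of-envelope sensor count that 50% overlap in both directions implies. The FOV figures are assumptions, and this uses a crude small-angle tiling estimate, not a real spherical packing:

```python
import math

# How many sensors to tile the sphere if each frame overlaps 50% with its
# neighbours in both directions? (FOV values assumed for a phone camera.)
hfov, vfov = math.radians(70), math.radians(55)
unique = (hfov / 2) * (vfov / 2)      # solid angle each frame adds (approx.)
sensors = 4 * math.pi / unique        # full sphere = 4*pi steradians
print(math.ceil(sensors))
# -> 43
```

Interestingly that lands right between the 26 and 52 mentioned earlier in the thread.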

Yeah, the iPhone camera is great. I’ve noticed that cheap phones often use pretty bad cameras though, despite the other hardware being decent. This makes me think that good, small cameras are pretty expensive.

Nokia had a 41-megapixel camera on a phone back in 2012 (the 808, and it was awesome). It’s not like larger sensors can’t be done; there’s just not a ton of demand for them…

But the 360 cameras on the market today (at least the ones that connect to a phone…) feel largely like novelties at this point and they’re made to be relatively cheap.

The camera, yes. The phone, fuck no. The UI was sluggish as all hell due to a horrible processor choice, and the screen resolution was reminiscent of the stone age.

I felt so let down by that phone, because I LOVED that camera to death but didn’t want to put up with the rest of the phone. Then Nokia went all exclusive with M$, which turned me off.

If they put a modernized PureView camera on an Android and paired it with flagship specs, they’d have an instant buy from me.

And IIRC, they put out an ad for the 808 that was “completely shot on the phone”, but judging by the shadows on the actors, the scene was clearly shot with a DSLR.